url (stringlengths 62-66) | repository_url (stringclasses 1 value) | labels_url (stringlengths 76-80) | comments_url (stringlengths 71-75) | events_url (stringlengths 69-73) | html_url (stringlengths 50-56) | id (int64 377M-2.15B) | node_id (stringlengths 18-32) | number (int64 1-29.2k) | title (stringlengths 1-487) | user (dict) | labels (list) | state (stringclasses 2 values) | locked (bool 2 classes) | assignee (dict) | assignees (list) | comments (sequence) | created_at (int64 1.54k-1.71k) | updated_at (int64 1.54k-1.71k) | closed_at (int64 1.54k-1.71k ⌀) | author_association (stringclasses 4 values) | active_lock_reason (stringclasses 2 values) | body (stringlengths 0-234k ⌀) | reactions (dict) | timeline_url (stringlengths 71-75) | state_reason (stringclasses 3 values) | draft (bool 2 classes) | pull_request (dict) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/transformers/issues/26731 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26731/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26731/comments | https://api.github.com/repos/huggingface/transformers/issues/26731/events | https://github.com/huggingface/transformers/pull/26731 | 1,937,454,770 | PR_kwDOCUB6oc5cf23d | 26,731 | fix the model card issue as `use_cuda_amp` is no more available | {
"login": "pacman100",
"id": 13534540,
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pacman100",
"html_url": "https://github.com/pacman100",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"repos_url": "https://api.github.com/users/pacman100/repos",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,697 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
1. More than 70 tests are currently failing because the function `extract_hyperparameters_from_trainer` tries to access the `use_cuda_amp` attribute, which is no longer available. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26731/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26731",
"html_url": "https://github.com/huggingface/transformers/pull/26731",
"diff_url": "https://github.com/huggingface/transformers/pull/26731.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26731.patch",
"merged_at": 1697032703000
} |
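For context, the failure described in this PR body is a helper that assumed a since-removed attribute always exists. Below is a minimal, hypothetical sketch of the defensive pattern such a fix typically uses; the `args` object and attribute handling are illustrative only and are not the exact code merged in the PR.

```python
def describe_mixed_precision(args) -> str:
    """Report the mixed-precision setup without assuming deprecated
    attributes such as `use_cuda_amp` still exist on the args object."""
    # Prefer the newer attribute when present (illustrative name).
    backend = getattr(args, "half_precision_backend", None)
    if backend and backend != "auto":
        return f"mixed_precision_backend: {backend}"
    # Older code read `args.use_cuda_amp` directly and crashed once the
    # attribute was removed; guard the access instead.
    if getattr(args, "use_cuda_amp", False):
        return "mixed_precision_backend: cuda_amp"
    return "mixed_precision_backend: none"
```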
https://api.github.com/repos/huggingface/transformers/issues/26730 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26730/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26730/comments | https://api.github.com/repos/huggingface/transformers/issues/26730/events | https://github.com/huggingface/transformers/issues/26730 | 1,937,360,645 | I_kwDOCUB6oc5zeccF | 26,730 | can we support LLM about quantization int4 | {
"login": "toby911",
"id": 112366887,
"node_id": "U_kgDOBrKVJw",
"avatar_url": "https://avatars.githubusercontent.com/u/112366887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/toby911",
"html_url": "https://github.com/toby911",
"followers_url": "https://api.github.com/users/toby911/followers",
"following_url": "https://api.github.com/users/toby911/following{/other_user}",
"gists_url": "https://api.github.com/users/toby911/gists{/gist_id}",
"starred_url": "https://api.github.com/users/toby911/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/toby911/subscriptions",
"organizations_url": "https://api.github.com/users/toby911/orgs",
"repos_url": "https://api.github.com/users/toby911/repos",
"events_url": "https://api.github.com/users/toby911/events{/privacy}",
"received_events_url": "https://api.github.com/users/toby911/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### Feature request
For individual users with limited compute resources, int4-quantized models are particularly important.
### Motivation
For individual users with limited compute resources, int4-quantized models are particularly important.
### Your contribution
For individual users with limited compute resources, int4-quantized models are particularly important. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26730/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26730/timeline | completed | null | null |
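As a pointer for readers of this feature request: 4-bit loading is already exposed in recent `transformers` releases through the bitsandbytes integration. A minimal sketch, assuming `bitsandbytes` and `accelerate` are installed; the checkpoint name is just an example, not one from this issue.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit (NF4) quantization config; compute runs in fp16.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model_id = "facebook/opt-1.3b"  # example model, swap in any causal LM
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)

inputs = tokenizer("Hello, int4 world!", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```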
https://api.github.com/repos/huggingface/transformers/issues/26729 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26729/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26729/comments | https://api.github.com/repos/huggingface/transformers/issues/26729/events | https://github.com/huggingface/transformers/issues/26729 | 1,937,225,106 | I_kwDOCUB6oc5zd7WS | 26,729 | Wrongly load .safetensors weights because it loads only depends on file names not weight files | {
"login": "Rickylht",
"id": 38951435,
"node_id": "MDQ6VXNlcjM4OTUxNDM1",
"avatar_url": "https://avatars.githubusercontent.com/u/38951435?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rickylht",
"html_url": "https://github.com/Rickylht",
"followers_url": "https://api.github.com/users/Rickylht/followers",
"following_url": "https://api.github.com/users/Rickylht/following{/other_user}",
"gists_url": "https://api.github.com/users/Rickylht/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rickylht/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rickylht/subscriptions",
"organizations_url": "https://api.github.com/users/Rickylht/orgs",
"repos_url": "https://api.github.com/users/Rickylht/repos",
"events_url": "https://api.github.com/users/Rickylht/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rickylht/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hmmm indeed, would you like to offer a fix?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | NONE | null | ### System Info
[2023-10-11 16:31:10,112] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.31.0
- Platform: Linux-5.15.0-70-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
@gante @Narsil
If there is a file (e.g. a JSON file) whose name contains "safetensor" in my model directory, it will automatically try to load .safetensors weights, even though my directory contains no .safetensors weights, only .bin weights.
I found this is because of the following code in
`/opt/conda/envs/llava/lib/python3.10/site-packages/transformers/modeling_utils.py`
line 112: `_safetensors_available = _is_package_available("safetensors")`
line 41: `def _is_package_available(pkg_name: str, return_version: bool = False) -> Union[Tuple[bool, str], bool]:`
The check depends only on file names, not on whether a weights file actually exists.
I think it would be better to check whether a file name ends with '.safetensors', since the corresponding safetensors index file is named 'model.safetensors.index.json'.
### Expected behavior
If there are no .safetensors weights in my directory, don't try to load .safetensors weights; load the .bin weights instead. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26729/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26729/timeline | completed | null | null |
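The suggestion in this issue body boils down to checking for actual `.safetensors` weight files rather than any file name containing the substring. A minimal, illustrative helper sketching that check; this is not the real `transformers` internals, whose check is on package availability and specific weight file names.

```python
import os

def has_safetensors_weights(model_dir: str) -> bool:
    """Return True only if a real .safetensors weight file (or the sharded
    index 'model.safetensors.index.json') is present in model_dir."""
    for name in os.listdir(model_dir):
        if name.endswith(".safetensors") or name == "model.safetensors.index.json":
            return True
    return False

# Example: prefer .bin weights when no safetensors files are found.
# weights_format = "safetensors" if has_safetensors_weights("./my_model") else "bin"
```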
https://api.github.com/repos/huggingface/transformers/issues/26728 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26728/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26728/comments | https://api.github.com/repos/huggingface/transformers/issues/26728/events | https://github.com/huggingface/transformers/issues/26728 | 1,937,126,778 | I_kwDOCUB6oc5zdjV6 | 26,728 | Empty output when using facebook/sam-vit-base with automatic mask generation pipeline | {
"login": "sunhaozhepy",
"id": 73462159,
"node_id": "MDQ6VXNlcjczNDYyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/73462159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sunhaozhepy",
"html_url": "https://github.com/sunhaozhepy",
"followers_url": "https://api.github.com/users/sunhaozhepy/followers",
"following_url": "https://api.github.com/users/sunhaozhepy/following{/other_user}",
"gists_url": "https://api.github.com/users/sunhaozhepy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sunhaozhepy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sunhaozhepy/subscriptions",
"organizations_url": "https://api.github.com/users/sunhaozhepy/orgs",
"repos_url": "https://api.github.com/users/sunhaozhepy/repos",
"events_url": "https://api.github.com/users/sunhaozhepy/events{/privacy}",
"received_events_url": "https://api.github.com/users/sunhaozhepy/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @rafaelpadilla @younesbelkada ",
"Hi @sunhaozhepy \r\nThanks for the issue, I have just tried it with a custom image and it seems to work great on my end. Can you maybe share the image with us so that we can reproduce and try to understand what is going on?",
"@sunhaozhepy, I also couldn't reproduce the error.\r\n\r\nI tested your code with this image: \r\n```python\r\nimg_url = \"https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png\"\r\nraw_image = Image.open(requests.get(img_url, stream=True).raw).convert(\"RGB\")\r\n```\r\nWhich resulted in:\r\n\r\n\r\n```python\r\nlen(outputs[\"masks\"]), len(outputs[\"scores\"])\r\n# 48, 48\r\n```\r\nIf you share your `test.png`, we could track down the issue.",
"Hi, thanks for your replies! Here's my image `test.png`, but I strongly suspect that the image itself is not the most critical issue...\r\n\r\n<img width=\"214\" alt=\"test\" src=\"https://github.com/huggingface/transformers/assets/73462159/0308bc03-89bb-4d24-96ff-ba13cdfe5238\">\r\n\r\nAlso, I forgot to mention that for the script to be runnable in the terminal, I added a line `plt.savefig(\"masked_image.png\")` before `del mask` to save the masked image. It looks like this:\r\n\r\n\r\n\r\nBasically the same, without masks.\r\n\r\nAnd here's an error image when I run this in my environment (masked image still saved):\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/data/jupyter/qbs_lora/sam_auto.py\", line 39, in <module>\r\n show_masks_on_image(raw_image, masks)\r\n File \"/data/jupyter/qbs_lora/sam_auto.py\", line 28, in show_masks_on_image\r\n del mask\r\n ^^^^\r\nUnboundLocalError: cannot access local variable 'mask' where it is not associated with a value\r\n```\r\n\r\nWhen I ran this script in colab, neither did I get empty masks, nor the error image.",
"Hi @sunhaozhepy,\r\n\r\nThis does not seen a problem related to the model nor the transformers library. However, I tried to replicate the error using your image and, by my side, the code finishes its execution normally..\r\n\r\nFor your given image, 15 masks were found:\r\n```python\r\nlen(outputs[\"scores\"]), len(outputs[\"masks\"])\r\n# 15, 15\r\n```\r\nThe `mask` object only exists if the for loop is executed. So, if your `masks` list is empty (`[]`), the `mask` object is never created, and will raise this error. \r\n\r\nThis is likely a problem related to your environment.",
"Ok then...Do you have any suggestions for me to fix my environment? Otherwise I can try to do my experiment in Colab or try another model.",
"I checked the SAM example notebook ([here](https://github.com/huggingface/notebooks/blob/main/examples/segment_anything.ipynb)) to find if there's a memory-related explanation to delete `mask` it call GC, but it is a different code.\r\n\r\nIn your code, first you call `del mask` inside the function `show_mask`. Then in `show_masks_on_image` you're trying to`del mask` again, but `mask` was has already been deleted.\r\n\r\nAny reason why you are calling `del mask` and `gc.collect()` in `show_masks_on_image`? I think it might not be necessary.\r\n\r\n\r\n",
"It's [here](https://github.com/huggingface/notebooks/blob/main/examples/automatic_mask_generation.ipynb).\r\n\r\nNever mind the environment, I've continued my experiment on Colab, and the experiment did not turn out to have good outcomes. I'll try something else..."
] | 1,697 | 1,697 | 1,697 | NONE | null | ### System Info
- `transformers` version: 4.35.0.dev0
- Platform: Linux-5.4.15-1.el7.elrepo.x86_64-x86_64-with-glibc2.27
- Python version: 3.11.4
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes (but I grabbed 1 GPU only to my workspace bash terminal)
### Who can help?
@amyeroberts @Narsil
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Following the example notebook of automatic mask generation using SAM model, I used the following code snippet to do image segmentation:
```
import numpy as np
import matplotlib.pyplot as plt
import gc
from transformers import pipeline
from PIL import Image
import requests


def show_mask(mask, ax, random_color=False):
    if random_color:
        color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0)
    else:
        color = np.array([30 / 255, 144 / 255, 255 / 255, 0.6])
    h, w = mask.shape[-2:]
    mask_image = mask.reshape(h, w, 1) * color.reshape(1, 1, -1)
    ax.imshow(mask_image)
    del mask
    gc.collect()


def show_masks_on_image(raw_image, masks):
    plt.imshow(np.array(raw_image))
    ax = plt.gca()
    ax.set_autoscale_on(False)
    for mask in masks:
        show_mask(mask, ax=ax, random_color=True)
    plt.axis("off")
    plt.show()
    del mask
    gc.collect()


generator = pipeline("mask-generation", model="facebook/sam-vit-base", device=0)

raw_image = Image.open("test.png")
outputs = generator(raw_image, points_per_batch=64)
masks = outputs["masks"]
show_masks_on_image(raw_image, masks)
```
The original code is provided in the form of a notebook, which I rearranged into a script that can be run from the terminal.
### Expected behavior
The output is empty, and no mask is shown on the image. When I insert `print(outputs)` into the code snippet, the output in the terminal is `{'masks': [], 'scores': tensor([])}`. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26728/timeline | completed | null | null |
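For readers hitting the same `UnboundLocalError` discussed in the comments of this issue: it comes from the stray `del mask` after the loop in `show_masks_on_image`, which only fails when the pipeline returns an empty mask list. A small, hedged rewrite of that helper, keeping the colors and layout from the snippet in the issue body and saving the figure instead of showing it, so an empty result does not raise.

```python
import numpy as np
import matplotlib.pyplot as plt

def show_masks_on_image(raw_image, masks, out_path="masked_image.png"):
    plt.imshow(np.array(raw_image))
    ax = plt.gca()
    ax.set_autoscale_on(False)
    if not masks:
        # Nothing to overlay; avoid referencing a loop variable that was never bound.
        print("The pipeline returned no masks for this image.")
    for mask in masks:
        color = np.concatenate([np.random.random(3), np.array([0.6])], axis=0)
        h, w = mask.shape[-2:]
        ax.imshow(mask.reshape(h, w, 1) * color.reshape(1, 1, -1))
    plt.axis("off")
    plt.savefig(out_path)
    plt.close()
```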
https://api.github.com/repos/huggingface/transformers/issues/26727 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26727/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26727/comments | https://api.github.com/repos/huggingface/transformers/issues/26727/events | https://github.com/huggingface/transformers/issues/26727 | 1,937,056,218 | I_kwDOCUB6oc5zdSHa | 26,727 | Error in loading model `TheBloke/Llama-2-7B-GGML` in google colab | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey! Could you share the output of `transformers-cli env`? Would help to know which version of transformers you are running! ",
"`Version: 4.34.0`",
"cc @younesbelkada I can reproduce this, might be a GPTQ command we are missing no? ",
"Hi @rajveer43 - GGML models are not natively supported in transformers core. Please refer to the model card of that model: https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML on how to run the model",
"Sure ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,697 | 1,700 | 1,700 | CONTRIBUTOR | null | ### System Info
OS: Windows 10
platform: google colab
code
```python
# Load model directly
from transformers import AutoModel
model = AutoModel.from_pretrained("TheBloke/Llama-2-7B-GGML")
```
error
```
Downloading (…)lve/main/config.json: 100%
29.0/29.0 [00:00<00:00, 869B/s]
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
[<ipython-input-1-ab811f62978d>](https://localhost:8080/#) in <cell line: 3>()
1 # Load model directly
2 from transformers import AutoModel
----> 3 model = AutoModel.from_pretrained("TheBloke/Llama-2-7B-GGML")
1 frames
[/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)
2968 )
2969 else:
-> 2970 raise EnvironmentError(
2971 f"{pretrained_model_name_or_path} does not appear to have a file named"
2972 f" {_add_variant(WEIGHTS_NAME, variant)}, {TF2_WEIGHTS_NAME}, {TF_WEIGHTS_NAME} or"
OSError: TheBloke/Llama-2-7B-GGML does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.
```
2. trial
```python
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="TheBloke/Llama-2-7B-GGML")
```
error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-2-ca57a68ffa3e>](https://localhost:8080/#) in <cell line: 4>()
2 from transformers import pipeline
3
----> 4 pipe = pipeline("text-generation", model="TheBloke/Llama-2-7B-GGML")
1 frames
[/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py](https://localhost:8080/#) in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs)
280 for class_name, trace in all_traceback.items():
281 error += f"while loading with {class_name}, an error is thrown:\n{trace}\n"
--> 282 raise ValueError(
283 f"Could not load model {model} with any of the following classes: {class_tuple}. See the original errors:\n\n{error}\n"
284 )
ValueError: Could not load model TheBloke/Llama-2-7B-GGML with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'>, <class 'transformers.models.auto.modeling_tf_auto.TFAutoModelForCausalLM'>). See the original errors:
while loading with AutoModelForCausalLM, an error is thrown:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py", line 269, in infer_framework_load_model
model = model_class.from_pretrained(model, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 565, in from_pretrained
return model_class.from_pretrained(
File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 2970, in from_pretrained
raise EnvironmentError(
OSError: TheBloke/Llama-2-7B-GGML does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack.
while loading with TFAutoModelForCausalLM, an error is thrown:
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py", line 269, in infer_framework_load_model
model = model_class.from_pretrained(model, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py", line 568, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.llama.configuration_llama.LlamaConfig'> for this kind of AutoModel: TFAutoModelForCausalLM.
Model type should be one of BertConfig, CamembertConfig, CTRLConfig, GPT2Config, GPT2Config, GPTJConfig, OpenAIGPTConfig, OPTConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoFormerConfig, TransfoXLConfig, XGLMConfig, XLMConfig, XLMRobertaConfig, XLNetConfig.
```
### Who can help?
@ArthurZucker
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
trying to use it for implementing ToT framework
### Expected behavior
Model loaded
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26727/timeline | completed | null | null |
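As the maintainers note in the comments of this issue, GGML checkpoints cannot be loaded with `AutoModel.from_pretrained`; they target llama.cpp-style runtimes. A minimal sketch using the third-party `llama-cpp-python` package (assumed installed via `pip install llama-cpp-python`); the model path below is a placeholder for a quantized file downloaded from the repo, not a file name confirmed in this issue.

```python
from llama_cpp import Llama

# Placeholder path: first download one of the quantized GGML files from
# the TheBloke/Llama-2-7B-GGML repository (e.g. with huggingface_hub).
llm = Llama(model_path="path/to/llama-2-7b.ggmlv3.q4_0.bin")

output = llm("Q: What is the capital of France? A:", max_tokens=32)
print(output["choices"][0]["text"])
```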
https://api.github.com/repos/huggingface/transformers/issues/26726 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26726/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26726/comments | https://api.github.com/repos/huggingface/transformers/issues/26726/events | https://github.com/huggingface/transformers/issues/26726 | 1,936,859,866 | I_kwDOCUB6oc5zciLa | 26,726 | Tutorial on implementing tree of thoughts(ToT) framework using a model | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"cc @gante @patrickvonplaten @MKhalusova who have been working on a rework of our generation docs!",
"Hi @rajveer43 👋 My apologies for the delayed response, I am still catching up on notifications from my recent holidays 🤗 \r\n\r\nWe would love to host a comprehensive tutorial about Tree of Thoughts! My suggestion would be to:\r\n1. Write a community blog post with the comprehensive tutorial (instructions on how to do it [here](https://huggingface.co/spaces/blog-explorers/README); Example of a high-quality community blog post [here](https://huggingface.co/blog/AmelieSchreiber/protein-binding-partners-with-esm2)). I'd be happy to review it if you're interested!\r\n2. We amplify it on social media, to expand its reach\r\n3. On the yet-to-be-created \"advanced generation use cases\" documentation page in `transformers`, we would add a very short demo, linking back to your blog post\r\n\r\nWhat do you think? 🤗 \r\n",
"> Hi @rajveer43 👋 My apologies for the delayed response, I am still catching up on notifications from my recent holidays 🤗\r\n> \r\n> We would love to host a comprehensive tutorial about Tree of Thoughts! My suggestion would be to:\r\n> \r\n> 1. Write a community blog post with the comprehensive tutorial (instructions on how to do it [here](https://huggingface.co/spaces/blog-explorers/README); Example of a high-quality community blog post [here](https://huggingface.co/blog/AmelieSchreiber/protein-binding-partners-with-esm2)). I'd be happy to review it if you're interested!\r\n> 2. We amplify it on social media, to expand its reach\r\n> 3. On the yet-to-be-created \"advanced generation use cases\" documentation page in `transformers`, we would add a very short demo, linking back to your blog post\r\n> \r\n> What do you think? 🤗\r\n\r\n@gante I am also excited to see a demo of Tree of Thoughts added to the \"advanced generation use cases\" documentation page in Transformers. I think this will be a valuable resource for the community.\r\n\r\nI would be happy to write a comprehensive tutorial about Tree of Thoughts for the Hugging Face community blog post. I will try my best to make it as informative and helpful as possible, and I will be sure to include instructions on how to use it, as well as examples of its use cases.\r\n\r\n\r\nWould you guide me on which model is best suited for it?.",
"Feel free to ping me for the blog post PR review (in addition to @gante ).",
"@rajveer43 if you have positive results with a 7B model, preferably a 7B model whose access is fully open (e.g. Llama 2 is NOT fully open, as it requires filling in a form), then that would be my suggestion. 7B models can be loaded by most people :) \r\n\r\nIf you have no model preference, then I'd like to point to our [Zephyr model](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha), or to have a look in the [LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)",
"\r\n> @rajveer43 if you have positive results with a 7B model, preferably a 7B model whose access is fully open (e.g. Llama 2 is NOT fully open, as it requires filling in a form), then that would be my suggestion. 7B models can be loaded by most people :)\r\n> \r\n> If you have no model preference, then I'd like to point to our [Zephyr model](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha), or to have a look in the [LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)\r\n\r\n7B version will be appropriate, There are basically three tasks of `ToT`\r\n\r\n1. Game of 24\r\n2. Creative writing \r\n3. crosswords\r\n\r\nthe model card state that \r\n\r\n\r\nso using Zephyr will not be that much useful. some other model like mistral or [Fuyu](https://huggingface.co/adept/fuyu-8b)\r\nmay be a better choice. \r\n\r\nthe task in `ToT` is type of `Text Generation and `question answering`\r\n",
"this is still under development\r\n",
"@gante where should be the location of the tutorial?",
"> @gante where should be the location of the tutorial?\r\nBased on earlier discussion, it should be in a community blog post. More context and instructions in the comment above: https://github.com/huggingface/transformers/issues/26726#issuecomment-1776906537",
"> > @gante where should be the location of the tutorial?\r\n> > Based on earlier discussion, it should be in a community blog post. More context and instructions in the comment above: [#26726 (comment)](https://github.com/huggingface/transformers/issues/26726#issuecomment-1776906537)\r\n\r\nI shoul target [Blog](https://github.com/huggingface/blog) repository for the same okay got it.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"under work!"
] | 1,697 | 1,708 | null | CONTRIBUTOR | null | ### Feature request
Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, Tree of Thoughts (ToT), which generalizes over the popular Chain of Thought approach to prompting language models, and enables exploration over coherent units of text (thoughts) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices. Our experiments show that ToT significantly enhances language models' problem-solving abilities on three novel tasks requiring non-trivial planning or search: Game of 24, Creative Writing, and Mini Crosswords. For instance, in Game of 24, while GPT-4 with chain-of-thought prompting only solved 4% of tasks, our method achieved a success rate of 74%. Code repo with all prompts
### Motivation
A comprehensive tutorial on implementing Tree of Thoughts with an open-source model would give users a much better understanding of the framework.
### Your contribution
https://github.com/princeton-nlp/tree-of-thought-llm
https://arxiv.org/abs/2305.10601 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26726/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26726/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26725 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26725/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26725/comments | https://api.github.com/repos/huggingface/transformers/issues/26725/events | https://github.com/huggingface/transformers/issues/26725 | 1,936,642,380 | I_kwDOCUB6oc5zbtFM | 26,725 | flash attention 2.0 for internLM | {
"login": "MaxLEAF3824",
"id": 51812574,
"node_id": "MDQ6VXNlcjUxODEyNTc0",
"avatar_url": "https://avatars.githubusercontent.com/u/51812574?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MaxLEAF3824",
"html_url": "https://github.com/MaxLEAF3824",
"followers_url": "https://api.github.com/users/MaxLEAF3824/followers",
"following_url": "https://api.github.com/users/MaxLEAF3824/following{/other_user}",
"gists_url": "https://api.github.com/users/MaxLEAF3824/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MaxLEAF3824/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MaxLEAF3824/subscriptions",
"organizations_url": "https://api.github.com/users/MaxLEAF3824/orgs",
"repos_url": "https://api.github.com/users/MaxLEAF3824/repos",
"events_url": "https://api.github.com/users/MaxLEAF3824/events{/privacy}",
"received_events_url": "https://api.github.com/users/MaxLEAF3824/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @younesbelkada ",
"Hi @MaxLEAF3824 \r\n\r\nInternLM is now using the same architecture as Llama as you can see here: https://huggingface.co/internlm/internlm-7b/discussions/4/files \r\n\r\nWhile waiting for that PR to be merged, you can benefit from FA-2 + intern-LM with the snippet below:\r\n\r\n```python\r\n# pip install flash-attn --no-build-isolation\r\n\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\r\n \"internlm/internlm-7b\",\r\n use_flash_attention_2=True,\r\n torch_dtype=torch.float16,\r\n low_cpu_mem_usage=True,\r\n revision=\"refs/pr/4\"\r\n)\r\n```",
"cc @Rocketknight1 do you have an idea why https://huggingface.co/internlm/internlm-7b/discussions/4/files did not get merged yet 🙏 ",
"Hi @younesbelkada, we're still waiting on approval from the InternLM team!",
"OK thanks ! Hopefully it will get merged soon",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,696 | 1,700 | 1,700 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26725/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26724 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26724/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26724/comments | https://api.github.com/repos/huggingface/transformers/issues/26724/events | https://github.com/huggingface/transformers/issues/26724 | 1,936,613,501 | I_kwDOCUB6oc5zbmB9 | 26,724 | Trainer Stuck at 0% Progress during Training on Multi-GPU Setup | {
"login": "ZYM66",
"id": 61892155,
"node_id": "MDQ6VXNlcjYxODkyMTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/61892155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZYM66",
"html_url": "https://github.com/ZYM66",
"followers_url": "https://api.github.com/users/ZYM66/followers",
"following_url": "https://api.github.com/users/ZYM66/following{/other_user}",
"gists_url": "https://api.github.com/users/ZYM66/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZYM66/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZYM66/subscriptions",
"organizations_url": "https://api.github.com/users/ZYM66/orgs",
"repos_url": "https://api.github.com/users/ZYM66/repos",
"events_url": "https://api.github.com/users/ZYM66/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZYM66/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 5616426447,
"node_id": "LA_kwDOCUB6oc8AAAABTsPdzw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/solved",
"name": "solved",
"color": "B1D6DC",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Add:I can stop it sometime and raise this exception\r\n File \"/home/dl/zym/llamaJP/TestUseContinuePretrainLlama.py\", line 221, in <module>\r\n train()\r\n File \"/home/dl/zym/llamaJP/TestUseContinuePretrainLlama.py\", line 215, in train\r\n trainer.train()\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1506, in train\r\n return inner_training_loop(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1801, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/home/dl/zym/llamaJP/TestUseContinuePretrainLlama.py\", line 104, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 2673, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1518, in _wrapped_call_impl\r\n return self._call_impl(*args, **kwargs)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/nn/modules/module.py\", line 1527, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py\", line 185, in forward\r\n outputs = self.parallel_apply(replicas, inputs, module_kwargs)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/nn/parallel/data_parallel.py\", line 200, in parallel_apply\r\n return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/nn/parallel/parallel_apply.py\", line 102, in parallel_apply\r\n thread.join()\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/threading.py\", line 1096, in join\r\n self._wait_for_tstate_lock()\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/threading.py\", line 1116, in _wait_for_tstate_lock\r\n if lock.acquire(block, timeout):\r\nKeyboardInterrupt\r\n",
"Most probably something related to GPU communication. I think we had something similar here: #24735. ",
"@ZYM66 does the training run fine on a single GPU setup? just run `CUDA_VISIBLE_DEVICES=0 yourscript.py`",
"> @ZYM66 does the training run fine on a single GPU setup? just run `CUDA_VISIBLE_DEVICES=0 yourscript.py`\r\n\r\nIt works well when I am using my CPU\r\n<img width=\"1134\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/61892155/d01c0502-c909-4c5b-a1d9-a6371c7bf55f\">\r\n\r\nBut I can't run it on a single GPU, because it will run out of memory\r\n",
"> @ZYM66 does the training run fine on a single GPU setup? just run `CUDA_VISIBLE_DEVICES=0 yourscript.py`\r\n\r\n@younesbelkada \r\nI tried my script on GPT2 with a single GPU It looks well!\r\n<img width=\"1108\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/61892155/2876aa34-a513-4e90-98e7-b642e66de334\">\r\n\r\nbut\r\nwhen I am using Mult-GPUs, the trainer still stuck at 0%\r\n<img width=\"1100\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/61892155/44564692-7a6c-4e12-9726-8db0ddb75430\">\r\n",
"@ZYM66 what happens when you use `accelerate launch --multi_gpu --num_processes {number_of_gpus} my script` instead? (basically, are you wanting model parallelism or data parallelism here?)",
"> @ZYM66 what happens when you use `accelerate launch --multi_gpu --num_processes {number_of_gpus} my script` instead? (basically, are you wanting model parallelism or data parallelism here?)\r\n \r\n@muellerzr \r\nI tried this and found it still stuck here with no errors:\r\n<img width=\"819\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/61892155/4ab6c3a5-ba20-40c4-b065-d57c1f1f7eaf\">\r\nI want to use my deepspeed config for Data Parallelism, but I don't use my config for debugging.\r\n",
"To pinpoint the location of the program blockage, we modified the source code of `~/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py`. In the train() function call, we added the following debug information.\r\n\r\n```python\r\n if delay_optimizer_creation:\r\n if use_accelerator_prepare:\r\n self.model = self.accelerator.prepare(self.model)\r\n self.create_optimizer_and_scheduler(num_training_steps=max_steps)\r\n\r\n # prepare using `accelerator` prepare\r\n if use_accelerator_prepare:\r\n self.model.train()\r\n if hasattr(self.lr_scheduler, \"step\"):\r\n print(\"4.0\")\r\n if self.use_apex:\r\n print(\"4.1\")\r\n model = self.accelerator.prepare(self.model)\r\n print(\"4.2\")\r\n else:\r\n print(\"4.3\")\r\n model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)\r\n print(\"4.4\")\r\n else:\r\n print(\"4.5\")\r\n # to handle cases wherein we pass \"DummyScheduler\" such as when it is specified in DeepSpeed config.\r\n model, self.optimizer, self.lr_scheduler = self.accelerator.prepare(\r\n self.model, self.optimizer, self.lr_scheduler\r\n )\r\n print(\"4.6\")\r\n print(\"5\")\r\n\r\n if self.is_fsdp_enabled:\r\n self.model = self.model_wrapped = model\r\n print(\"6\")\r\n```\r\nIt is found that the program will only print 4.0 and 4.3, indicating that the following line is blocked.\r\n```python\r\nmodel, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)\r\n```\r\nWe inquired about the following information:\r\nhttps://github.com/pytorch/pytorch/issues/29482\r\nhttps://blog.csdn.net/m0_37426155/article/details/108129952\r\nhttps://espnet.github.io/espnet/espnet2_distributed.html#troubleshooting-for-nccl-with-ethernet-case\r\n\r\nThen some modifications were made to the startup command.\r\n\r\n① \r\n```bash\r\nNCCL_DEBUG=INFO NCCL_IB_DISABLE=1 NCCL_SOCKET_IFNAME=eno accelerate launch ContinuePretrainLlama.py \\\r\n--data_path ./PretrainData/kyoto-train-dev.txt \\\r\n--output_dir ./llama2 \\\r\n--num_train_epochs 1 \\\r\n--bf16 True \\\r\n--peft True \\\r\n--lora_config lora_config.json \\\r\n--logging_steps 10 \\\r\n--model_name_or_path meta-llama/Llama-2-7b-hf\r\n```\r\nIt encountered a new error after starting:\r\n```\r\nncclInternalError: Internal check failed. 
Last error: Bootstrap : no socket interface found\r\n```\r\n\r\n②\r\n```bash\r\nNCCL_DEBUG=INFO NCCL_IB_DISABLE=1 NCCL_SOCKET_IFNAME=^lo,docker,virbr,vmnet,vboxnet,wl,ww,ppp accelerate launch ContinuePretrainLlama.py \\\r\n--data_path ./PretrainData/kyoto-train-dev.txt \\\r\n--output_dir ./llama2 \\\r\n--num_train_epochs 1 \\\r\n--bf16 True \\\r\n--peft True \\\r\n--lora_config lora_config.json \\\r\n--logging_steps 10 \\\r\n--model_name_or_path meta-llama/Llama-2-7b-hf\r\n```\r\nNo error was reported when running at this time, but it eventually timed out.\r\n```\r\nbefore trainer.train()\r\n4.0\r\n4.3\r\naaa:55300:55300 [3] NCCL INFO cudaDriverVersion 12020\r\naaa:55300:55300 [3] NCCL INFO NCCL_SOCKET_IFNAME set by environment to ^lo,docker,virbr,vmnet,vboxnet,wl,ww,ppp\r\naaa:55300:55300 [3] NCCL INFO Bootstrap : Using br0:10.4.11.18<0>\r\naaa:55300:55300 [3] NCCL INFO NET/Plugin : Plugin load (libnccl-net.so) returned 2 : libnccl-net.so: cannot open shared object file: No such file or directory\r\naaa:55300:55300 [3] NCCL INFO NET/Plugin : No plugin found, using internal implementation\r\naaa:55300:56600 [3] NCCL INFO NCCL_IB_DISABLE set by environment to 1.\r\naaa:55300:56600 [3] NCCL INFO NCCL_SOCKET_IFNAME set by environment to ^lo,docker,virbr,vmnet,vboxnet,wl,ww,ppp\r\naaa:55300:56600 [3] NCCL INFO NET/Socket : Using [0]br0:10.4.11.18<0> [1]vnet0:fe80::fc54:ff:fe3f:6ac%vnet0<0>\r\naaa:55300:56600 [3] NCCL INFO Using network Socket\r\naaa:55300:56600 [3] NCCL INFO Setting affinity for GPU 3 to 0fffff,ff000000,0fffffff\r\naaa:55300:56600 [3] NCCL INFO NVLS multicast support is not available on dev 3\r\naaa:55302:56568 [5] NCCL INFO Setting affinity for GPU 5 to ffff,fff00000,00ffffff,f0000000\r\naaa:55302:56568 [5] NCCL INFO NVLS multicast support is not available on dev 5\r\naaa:55301:56583 [4] NCCL INFO Setting affinity for GPU 4 to ffff,fff00000,00ffffff,f0000000\r\naaa:55301:56583 [4] NCCL INFO NVLS multicast support is not available on dev 4\r\naaa:55304:56549 [7] NCCL INFO Setting affinity for GPU 7 to ffff,fff00000,00ffffff,f0000000\r\naaa:55304:56549 [7] NCCL INFO NVLS multicast support is not available on dev 7\r\naaa:55303:56586 [6] NCCL INFO Setting affinity for GPU 6 to ffff,fff00000,00ffffff,f0000000\r\naaa:55303:56586 [6] NCCL INFO NVLS multicast support is not available on dev 6\r\naaa:55298:56548 [1] NCCL INFO Setting affinity for GPU 1 to 0fffff,ff000000,0fffffff\r\naaa:55298:56548 [1] NCCL INFO NVLS multicast support is not available on dev 1\r\naaa:55297:56547 [0] NCCL INFO Setting affinity for GPU 0 to 0fffff,ff000000,0fffffff\r\naaa:55297:56547 [0] NCCL INFO NVLS multicast support is not available on dev 0\r\naaa:55299:56591 [2] NCCL INFO Setting affinity for GPU 2 to 0fffff,ff000000,0fffffff\r\naaa:55299:56591 [2] NCCL INFO NVLS multicast support is not available on dev 2\r\naaa:55299:56591 [2] NCCL INFO Trees [0] 3/-1/-1->2->1 [1] 3/-1/-1->2->1\r\naaa:55299:56591 [2] NCCL INFO P2P Chunksize set to 524288\r\naaa:55298:56548 [1] NCCL INFO Trees [0] 2/-1/-1->1->0 [1] 2/-1/-1->1->0\r\naaa:55297:56547 [0] NCCL INFO Channel 00/02 : 0 1 2 3 4 5 6 7\r\naaa:55298:56548 [1] NCCL INFO P2P Chunksize set to 524288\r\naaa:55304:56549 [7] NCCL INFO Trees [0] -1/-1/-1->7->6 [1] -1/-1/-1->7->6\r\naaa:55297:56547 [0] NCCL INFO Channel 01/02 : 0 1 2 3 4 5 6 7\r\naaa:55303:56586 [6] NCCL INFO Trees [0] 7/-1/-1->6->5 [1] 7/-1/-1->6->5\r\naaa:55304:56549 [7] NCCL INFO P2P Chunksize set to 524288\r\naaa:55297:56547 [0] NCCL INFO Trees [0] 1/-1/-1->0->-1 [1] 
1/-1/-1->0->-1\r\naaa:55303:56586 [6] NCCL INFO P2P Chunksize set to 524288\r\naaa:55302:56568 [5] NCCL INFO Trees [0] 6/-1/-1->5->4 [1] 6/-1/-1->5->4\r\naaa:55297:56547 [0] NCCL INFO P2P Chunksize set to 524288\r\naaa:55301:56583 [4] NCCL INFO Trees [0] 5/-1/-1->4->3 [1] 5/-1/-1->4->3\r\naaa:55302:56568 [5] NCCL INFO P2P Chunksize set to 524288\r\naaa:55300:56600 [3] NCCL INFO Trees [0] 4/-1/-1->3->2 [1] 4/-1/-1->3->2\r\naaa:55301:56583 [4] NCCL INFO P2P Chunksize set to 524288\r\naaa:55300:56600 [3] NCCL INFO P2P Chunksize set to 524288\r\naaa:55301:56583 [4] NCCL INFO Channel 00/0 : 4[ce000] -> 5[d1000] via P2P/IPC/read\r\naaa:55297:56547 [0] NCCL INFO Channel 00/0 : 0[4f000] -> 1[52000] via P2P/IPC/read\r\naaa:55301:56583 [4] NCCL INFO Channel 01/0 : 4[ce000] -> 5[d1000] via P2P/IPC/read\r\naaa:55297:56547 [0] NCCL INFO Channel 01/0 : 0[4f000] -> 1[52000] via P2P/IPC/read\r\naaa:55303:56586 [6] NCCL INFO Channel 00/0 : 6[d5000] -> 7[d6000] via P2P/IPC/read\r\naaa:55302:56568 [5] NCCL INFO Channel 00/0 : 5[d1000] -> 6[d5000] via P2P/IPC\r\naaa:55304:56549 [7] NCCL INFO Channel 00 : 7[d6000] -> 0[4f000] via SHM/direct/direct\r\naaa:55304:56549 [7] NCCL INFO Channel 01 : 7[d6000] -> 0[4f000] via SHM/direct/direct\r\naaa:55299:56591 [2] NCCL INFO Channel 00/0 : 2[56000] -> 3[57000] via P2P/IPC/read\r\naaa:55298:56548 [1] NCCL INFO Channel 00/0 : 1[52000] -> 2[56000] via P2P/IPC\r\naaa:55300:56600 [3] NCCL INFO Channel 00 : 3[57000] -> 4[ce000] via SHM/direct/direct\r\naaa:55300:56600 [3] NCCL INFO Channel 01 : 3[57000] -> 4[ce000] via SHM/direct/direct\r\naaa:55303:56586 [6] NCCL INFO Channel 01/0 : 6[d5000] -> 7[d6000] via P2P/IPC/read\r\naaa:55302:56568 [5] NCCL INFO Channel 01/0 : 5[d1000] -> 6[d5000] via P2P/IPC\r\naaa:55298:56548 [1] NCCL INFO Channel 01/0 : 1[52000] -> 2[56000] via P2P/IPC\r\naaa:55299:56591 [2] NCCL INFO Channel 01/0 : 2[56000] -> 3[57000] via P2P/IPC/read\r\naaa:55304:56549 [7] NCCL INFO Connected all rings\r\naaa:55304:56549 [7] NCCL INFO Channel 00/0 : 7[d6000] -> 6[d5000] via P2P/IPC/read\r\naaa:55301:56583 [4] NCCL INFO Connected all rings\r\naaa:55302:56568 [5] NCCL INFO Connected all rings\r\naaa:55303:56586 [6] NCCL INFO Connected all rings\r\naaa:55297:56547 [0] NCCL INFO Connected all rings\r\naaa:55299:56591 [2] NCCL INFO Connected all rings\r\naaa:55298:56548 [1] NCCL INFO Connected all rings\r\naaa:55300:56600 [3] NCCL INFO Connected all rings\r\naaa:55300:56600 [3] NCCL INFO Channel 00/0 : 3[57000] -> 2[56000] via P2P/IPC/read\r\naaa:55304:56549 [7] NCCL INFO Channel 01/0 : 7[d6000] -> 6[d5000] via P2P/IPC/read\r\naaa:55300:56600 [3] NCCL INFO Channel 01/0 : 3[57000] -> 2[56000] via P2P/IPC/read\r\naaa:55301:56583 [4] NCCL INFO Channel 00 : 4[ce000] -> 3[57000] via SHM/direct/direct\r\naaa:55301:56583 [4] NCCL INFO Channel 01 : 4[ce000] -> 3[57000] via SHM/direct/direct\r\naaa:55303:56586 [6] NCCL INFO Channel 00/0 : 6[d5000] -> 5[d1000] via P2P/IPC\r\naaa:55302:56568 [5] NCCL INFO Channel 00/0 : 5[d1000] -> 4[ce000] via P2P/IPC/read\r\naaa:55299:56591 [2] NCCL INFO Channel 00/0 : 2[56000] -> 1[52000] via P2P/IPC\r\naaa:55298:56548 [1] NCCL INFO Channel 00/0 : 1[52000] -> 0[4f000] via P2P/IPC/read\r\naaa:55299:56591 [2] NCCL INFO Channel 01/0 : 2[56000] -> 1[52000] via P2P/IPC\r\naaa:55302:56568 [5] NCCL INFO Channel 01/0 : 5[d1000] -> 4[ce000] via P2P/IPC/read\r\naaa:55303:56586 [6] NCCL INFO Channel 01/0 : 6[d5000] -> 5[d1000] via P2P/IPC\r\naaa:55298:56548 [1] NCCL INFO Channel 01/0 : 1[52000] -> 0[4f000] via P2P/IPC/read\r\naaa:55300:56600 [3] NCCL 
INFO Connected all trees\r\naaa:55300:56600 [3] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 512 | 512\r\naaa:55300:56600 [3] NCCL INFO 2 coll channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer\r\naaa:55304:56549 [7] NCCL INFO Connected all trees\r\naaa:55304:56549 [7] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 512 | 512\r\naaa:55304:56549 [7] NCCL INFO 2 coll channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer\r\naaa:55303:56586 [6] NCCL INFO Connected all trees\r\naaa:55303:56586 [6] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 512 | 512\r\naaa:55303:56586 [6] NCCL INFO 2 coll channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer\r\naaa:55302:56568 [5] NCCL INFO Connected all trees\r\naaa:55302:56568 [5] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 512 | 512\r\naaa:55302:56568 [5] NCCL INFO 2 coll channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer\r\naaa:55301:56583 [4] NCCL INFO Connected all trees\r\naaa:55301:56583 [4] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 512 | 512\r\naaa:55301:56583 [4] NCCL INFO 2 coll channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer\r\naaa:55299:56591 [2] NCCL INFO Connected all trees\r\naaa:55299:56591 [2] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 512 | 512\r\naaa:55299:56591 [2] NCCL INFO 2 coll channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer\r\naaa:55298:56548 [1] NCCL INFO Connected all trees\r\naaa:55298:56548 [1] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 512 | 512\r\naaa:55298:56548 [1] NCCL INFO 2 coll channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer\r\naaa:55297:56547 [0] NCCL INFO Connected all trees\r\naaa:55297:56547 [0] NCCL INFO threadThresholds 8/8/64 | 64/8/64 | 512 | 512\r\naaa:55297:56547 [0] NCCL INFO 2 coll channels, 0 nvls channels, 2 p2p channels, 2 p2p channels per peer\r\naaa:55298:56548 [1] NCCL INFO comm 0xfa8d770 rank 1 nranks 8 cudaDev 1 busId 52000 commId 0xb081437536642957 - Init COMPLETE\r\naaa:55303:56586 [6] NCCL INFO comm 0xffbdc10 rank 6 nranks 8 cudaDev 6 busId d5000 commId 0xb081437536642957 - Init COMPLETE\r\naaa:55302:56568 [5] NCCL INFO comm 0xa225f30 rank 5 nranks 8 cudaDev 5 busId d1000 commId 0xb081437536642957 - Init COMPLETE\r\naaa:55299:56591 [2] NCCL INFO comm 0x1009c7e0 rank 2 nranks 8 cudaDev 2 busId 56000 commId 0xb081437536642957 - Init COMPLETE\r\naaa:55304:56549 [7] NCCL INFO comm 0x84f2140 rank 7 nranks 8 cudaDev 7 busId d6000 commId 0xb081437536642957 - Init COMPLETE\r\naaa:55297:56547 [0] NCCL INFO comm 0x15687100 rank 0 nranks 8 cudaDev 0 busId 4f000 commId 0xb081437536642957 - Init COMPLETE\r\naaa:55301:56583 [4] NCCL INFO comm 0x1170e1b0 rank 4 nranks 8 cudaDev 4 busId ce000 commId 0xb081437536642957 - Init COMPLETE\r\naaa:55300:56600 [3] NCCL INFO comm 0x16f8ac10 rank 3 nranks 8 cudaDev 3 busId 57000 commId 0xb081437536642957 - Init COMPLETE\r\n[E ProcessGroupNCCL.cpp:474] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, NumelIn=1, NumelOut=8, Timeout(ms)=1800000) ran for 1800111 milliseconds before timing out.\r\nTraceback (most recent call last):\r\n File \"/home/dl/lzf/llamaJP/ContinuePretrainLlama.py\", line 140, in <module>\r\nTraceback (most recent call last):\r\n File \"/home/dl/lzf/llamaJP/ContinuePretrainLlama.py\", line 140, in <module>\r\n train()\r\n File \"/home/dl/lzf/llamaJP/ContinuePretrainLlama.py\", line 133, in train\r\n train()\r\n File \"/home/dl/lzf/llamaJP/ContinuePretrainLlama.py\", line 133, in train\r\n 
trainer.train()\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1506, in train\r\n trainer.train()\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1506, in train\r\n return inner_training_loop(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1639, in _inner_training_loop\r\n return inner_training_loop(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1639, in _inner_training_loop\r\n model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1284, in prepare\r\n model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1284, in prepare\r\n result = tuple(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1285, in <genexpr>\r\n result = tuple(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1285, in <genexpr>\r\n self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1090, in _prepare_one\r\n self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1090, in _prepare_one\r\n return self.prepare_model(obj, device_placement=device_placement)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1429, in prepare_model\r\n return self.prepare_model(obj, device_placement=device_placement)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1429, in prepare_model\r\n model = torch.nn.parallel.DistributedDataParallel(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/nn/parallel/distributed.py\", line 795, in __init__\r\n model = torch.nn.parallel.DistributedDataParallel(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/nn/parallel/distributed.py\", line 795, in __init__\r\n _verify_param_shape_across_processes(self.process_group, parameters)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/distributed/utils.py\", line 265, in _verify_param_shape_across_processes\r\n _verify_param_shape_across_processes(self.process_group, parameters)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/distributed/utils.py\", line 265, in _verify_param_shape_across_processes\r\n return dist._verify_params_across_processes(process_group, tensors, logger)\r\nRuntimeError: DDP expects same model across all ranks, but Rank 3 has 448 params, while rank 0 has inconsistent 0 params.\r\n return dist._verify_params_across_processes(process_group, tensors, logger)\r\nRuntimeError: DDP expects same model across all ranks, but Rank 4 has 448 params, while rank 0 has inconsistent 0 params.\r\nTraceback (most recent call last):\r\n File \"/home/dl/lzf/llamaJP/ContinuePretrainLlama.py\", line 140, in <module>\r\nTraceback (most recent call last):\r\n File \"/home/dl/lzf/llamaJP/ContinuePretrainLlama.py\", line 140, in <module>\r\n 
train()\r\n File \"/home/dl/lzf/llamaJP/ContinuePretrainLlama.py\", line 133, in train\r\n train()\r\n File \"/home/dl/lzf/llamaJP/ContinuePretrainLlama.py\", line 133, in train\r\n trainer.train()\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1506, in train\r\n trainer.train()\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1506, in train\r\n return inner_training_loop(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1639, in _inner_training_loop\r\n return inner_training_loop(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1639, in _inner_training_loop\r\n model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1284, in prepare\r\n model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1284, in prepare\r\n result = tuple(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1285, in <genexpr>\r\n result = tuple(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1285, in <genexpr>\r\n self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1090, in _prepare_one\r\n self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1090, in _prepare_one\r\n return self.prepare_model(obj, device_placement=device_placement)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1429, in prepare_model\r\n return self.prepare_model(obj, device_placement=device_placement)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1429, in prepare_model\r\n model = torch.nn.parallel.DistributedDataParallel(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/nn/parallel/distributed.py\", line 795, in __init__\r\n model = torch.nn.parallel.DistributedDataParallel(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/nn/parallel/distributed.py\", line 795, in __init__\r\n _verify_param_shape_across_processes(self.process_group, parameters)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/distributed/utils.py\", line 265, in _verify_param_shape_across_processes\r\n _verify_param_shape_across_processes(self.process_group, parameters)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/distributed/utils.py\", line 265, in _verify_param_shape_across_processes\r\nreturn dist._verify_params_across_processes(process_group, tensors, logger)\r\nRuntimeError: DDP expects same model across all ranks, but Rank 2 has 448 params, while rank 0 has inconsistent 0 params.\r\n return dist._verify_params_across_processes(process_group, tensors, logger)\r\nRuntimeError: DDP expects same model across all ranks, but Rank 5 has 448 params, while rank 0 has inconsistent 0 params.\r\n[E ProcessGroupNCCL.cpp:474] [Rank 4] Watchdog caught collective 
operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, NumelIn=1, NumelOut=8, Timeout(ms)=1800000) ran for 1800283 milliseconds before timing out.\r\n[E ProcessGroupNCCL.cpp:474] [Rank 6] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, NumelIn=1, NumelOut=8, Timeout(ms)=1800000) ran for 1800289 milliseconds before timing out.\r\nTraceback (most recent call last):\r\nTraceback (most recent call last):\r\n File \"/home/dl/lzf/llamaJP/ContinuePretrainLlama.py\", line 140, in <module>\r\n File \"/home/dl/lzf/llamaJP/ContinuePretrainLlama.py\", line 140, in <module>\r\n train()train()\r\n\r\n File \"/home/dl/lzf/llamaJP/ContinuePretrainLlama.py\", line 133, in train\r\n File \"/home/dl/lzf/llamaJP/ContinuePretrainLlama.py\", line 133, in train\r\n trainer.train()\r\n trainer.train() File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1506, in train\r\n\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1506, in train\r\n return inner_training_loop(return inner_training_loop(\r\n\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1639, in _inner_training_loop\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1639, in _inner_training_loop\r\n model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)\r\n model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1284, in prepare\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1284, in prepare\r\n result = tuple(result = tuple(\r\n\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1285, in <genexpr>\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1285, in <genexpr>\r\n self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1090, in _prepare_one\r\nself._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1090, in _prepare_one\r\n return self.prepare_model(obj, device_placement=device_placement)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1429, in prepare_model\r\nreturn self.prepare_model(obj, device_placement=device_placement)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1429, in prepare_model\r\n model = torch.nn.parallel.DistributedDataParallel(model = torch.nn.parallel.DistributedDataParallel(\r\n\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/nn/parallel/distributed.py\", line 795, in __init__\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/nn/parallel/distributed.py\", line 795, in __init__\r\n _verify_param_shape_across_processes(self.process_group, parameters)_verify_param_shape_across_processes(self.process_group, parameters)\r\n\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/distributed/utils.py\", line 265, in _verify_param_shape_across_processes\r\n File 
\"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/distributed/utils.py\", line 265, in _verify_param_shape_across_processes\r\n return dist._verify_params_across_processes(process_group, tensors, logger)return dist._verify_params_across_processes(process_group, tensors, logger)\r\n\r\nRuntimeError: RuntimeErrorDDP expects same model across all ranks, but Rank 7 has 448 params, while rank 0 has inconsistent 0 params.: \r\nDDP expects same model across all ranks, but Rank 0 has 448 params, while rank 1 has inconsistent 0 params.\r\nTraceback (most recent call last):\r\n File \"/home/dl/lzf/llamaJP/ContinuePretrainLlama.py\", line 140, in <module>\r\nTraceback (most recent call last):\r\n File \"/home/dl/lzf/llamaJP/ContinuePretrainLlama.py\", line 140, in <module>\r\n train()\r\n File \"/home/dl/lzf/llamaJP/ContinuePretrainLlama.py\", line 133, in train\r\n train() \r\ntrainer.train() File \"/home/dl/lzf/llamaJP/ContinuePretrainLlama.py\", line 133, in train\r\n\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1506, in train\r\n trainer.train()\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1506, in train\r\n return inner_training_loop(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1639, in _inner_training_loop\r\n return inner_training_loop(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/transformers/trainer.py\", line 1639, in _inner_training_loop\r\n model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1284, in prepare\r\n model, self.optimizer = self.accelerator.prepare(self.model, self.optimizer)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1284, in prepare\r\n result = tuple(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1285, in <genexpr>\r\n result = tuple(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1285, in <genexpr>\r\n self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1090, in _prepare_one\r\n self._prepare_one(obj, first_pass=True, device_placement=d) for obj, d in zip(args, device_placement)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1090, in _prepare_one\r\n return self.prepare_model(obj, device_placement=device_placement)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1429, in prepare_model\r\n return self.prepare_model(obj, device_placement=device_placement)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/accelerator.py\", line 1429, in prepare_model\r\n model = torch.nn.parallel.DistributedDataParallel(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/nn/parallel/distributed.py\", line 795, in __init__\r\n model = torch.nn.parallel.DistributedDataParallel(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/nn/parallel/distributed.py\", line 795, in __init__\r\n _verify_param_shape_across_processes(self.process_group, parameters)\r\n File 
\"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/distributed/utils.py\", line 265, in _verify_param_shape_across_processes\r\n _verify_param_shape_across_processes(self.process_group, parameters) \r\nreturn dist._verify_params_across_processes(process_group, tensors, logger)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/distributed/utils.py\", line 265, in _verify_param_shape_across_processes\r\nRuntimeError: DDP expects same model across all ranks, but Rank 6 has 448 params, while rank 0 has inconsistent 0 params.\r\n return dist._verify_params_across_processes(process_group, tensors, logger)\r\nRuntimeError: DDP expects same model across all ranks, but Rank 1 has 448 params, while rank 2 has inconsistent 0 params.\r\n[E ProcessGroupNCCL.cpp:474] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, NumelIn=1, NumelOut=8, Timeout(ms)=1800000) ran for 1800522 milliseconds before timing out.\r\naaa:55297:56614 [0] NCCL INFO [Service thread] Connection closed by localRank 0\r\n[E ProcessGroupNCCL.cpp:474] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, NumelIn=1, NumelOut=8, Timeout(ms)=1800000) ran for 1800530 milliseconds before timing out.\r\n[E ProcessGroupNCCL.cpp:474] [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, NumelIn=1, NumelOut=8, Timeout(ms)=1800000) ran for 1800591 milliseconds before timing out.\r\n[E ProcessGroupNCCL.cpp:474] [Rank 7] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, NumelIn=1, NumelOut=8, Timeout(ms)=1800000) ran for 1800619 milliseconds before timing out.\r\n[E ProcessGroupNCCL.cpp:474] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, NumelIn=1, NumelOut=8, Timeout(ms)=1800000) ran for 1800651 milliseconds before timing out.\r\naaa:55297:55856 [0] NCCL INFO comm 0x15687100 rank 0 nranks 8 cudaDev 0 busId 4f000 - Abort COMPLETE\r\n[E ProcessGroupNCCL.cpp:488] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:494] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:915] [Rank 0] NCCL watchdog thread terminated with exception: [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, NumelIn=1, NumelOut=8, Timeout(ms)=1800000) ran for 1800522 milliseconds before timing out.\r\naaa:55299:56611 [2] NCCL INFO [Service thread] Connection closed by localRank 2\r\naaa:55301:56617 [4] NCCL INFO [Service thread] Connection closed by localRank 4\r\naaa:55299:55850 [0] NCCL INFO comm 0x1009c7e0 rank 2 nranks 8 cudaDev 2 busId 56000 - Abort COMPLETE\r\n[E ProcessGroupNCCL.cpp:488] Some NCCL operations have failed or timed out. 
Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:494] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:915] [Rank 2] NCCL watchdog thread terminated with exception: [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, NumelIn=1, NumelOut=8, Timeout(ms)=1800000) ran for 1800111 milliseconds before timing out.\r\naaa:55303:56613 [6] NCCL INFO [Service thread] Connection closed by localRank 6\r\naaa:55301:55852 [0] NCCL INFO comm 0x1170e1b0 rank 4 nranks 8 cudaDev 4 busId ce000 - Abort COMPLETE\r\n[E ProcessGroupNCCL.cpp:488] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:494] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:915] [Rank 4] NCCL watchdog thread terminated with exception: [Rank 4] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, NumelIn=1, NumelOut=8, Timeout(ms)=1800000) ran for 1800283 milliseconds before timing out.\r\naaa:55303:55853 [0] NCCL INFO comm 0xffbdc10 rank 6 nranks 8 cudaDev 6 busId d5000 - Abort COMPLETE\r\n[E ProcessGroupNCCL.cpp:488] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:494] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:915] [Rank 6] NCCL watchdog thread terminated with exception: [Rank 6] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, NumelIn=1, NumelOut=8, Timeout(ms)=1800000) ran for 1800289 milliseconds before timing out.\r\naaa:55298:56616 [1] NCCL INFO [Service thread] Connection closed by localRank 1\r\naaa:55300:56618 [3] NCCL INFO [Service thread] Connection closed by localRank 3\r\naaa:55298:55845 [0] NCCL INFO comm 0xfa8d770 rank 1 nranks 8 cudaDev 1 busId 52000 - Abort COMPLETE\r\n[E ProcessGroupNCCL.cpp:488] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:494] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:915] [Rank 1] NCCL watchdog thread terminated with exception: [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, NumelIn=1, NumelOut=8, Timeout(ms)=1800000) ran for 1800651 milliseconds before timing out.\r\naaa:55302:56615 [5] NCCL INFO [Service thread] Connection closed by localRank 5\r\naaa:55300:55857 [0] NCCL INFO comm 0x16f8ac10 rank 3 nranks 8 cudaDev 3 busId 57000 - Abort COMPLETE\r\n[E ProcessGroupNCCL.cpp:488] Some NCCL operations have failed or timed out. 
Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:494] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:915] [Rank 3] NCCL watchdog thread terminated with exception: [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, NumelIn=1, NumelOut=8, Timeout(ms)=1800000) ran for 1800530 milliseconds before timing out.\r\naaa:55304:56612 [7] NCCL INFO [Service thread] Connection closed by localRank 7\r\naaa:55302:55864 [0] NCCL INFO comm 0xa225f30 rank 5 nranks 8 cudaDev 5 busId d1000 - Abort COMPLETE\r\n[E ProcessGroupNCCL.cpp:488] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:494] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:915] [Rank 5] NCCL watchdog thread terminated with exception: [Rank 5] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, NumelIn=1, NumelOut=8, Timeout(ms)=1800000) ran for 1800591 milliseconds before timing out.\r\naaa:55304:55848 [0] NCCL INFO comm 0x84f2140 rank 7 nranks 8 cudaDev 7 busId d6000 - Abort COMPLETE\r\n[E ProcessGroupNCCL.cpp:488] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data.\r\n[E ProcessGroupNCCL.cpp:494] To avoid data inconsistency, we are taking the entire process down.\r\n[E ProcessGroupNCCL.cpp:915] [Rank 7] NCCL watchdog thread terminated with exception: [Rank 7] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=1, OpType=ALLGATHER, NumelIn=1, NumelOut=8, Timeout(ms)=1800000) ran for 1800619 milliseconds before timing out.\r\n[2023-10-12 21:59:50,478] torch.distributed.elastic.multiprocessing.api: [ERROR] failed (exitcode: -6) local_rank: 0 (pid: 55297) of binary: /root/miniconda3/envs/LLM/bin/python\r\nTraceback (most recent call last):\r\n File \"/root/miniconda3/envs/LLM/bin/accelerate\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py\", line 47, in main\r\n args.func(args)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/commands/launch.py\", line 977, in launch_command\r\n multi_gpu_launcher(args)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/accelerate/commands/launch.py\", line 646, in multi_gpu_launcher\r\n distrib_run.run(args)\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/distributed/run.py\", line 797, in run\r\n elastic_launch(\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/distributed/launcher/api.py\", line 134, in __call__\r\n return launch_agent(self._config, self._entrypoint, list(args))\r\n File \"/root/miniconda3/envs/LLM/lib/python3.10/site-packages/torch/distributed/launcher/api.py\", line 264, in launch_agent\r\n raise ChildFailedError(\r\ntorch.distributed.elastic.multiprocessing.errors.ChildFailedError: \r\n======================================================\r\nContinuePretrainLlama.py FAILED\r\n------------------------------------------------------\r\nFailures:\r\n[1]:\r\n time : 2023-10-12_21:59:50\r\n host : aaa\r\n rank : 1 (local_rank: 1)\r\n exitcode : -6 (pid: 55298)\r\n error_file: <N/A>\r\n traceback : Signal 
6 (SIGABRT) received by PID 55298\r\n[2]:\r\n time : 2023-10-12_21:59:50\r\n host : aaa\r\n rank : 2 (local_rank: 2)\r\n exitcode : -6 (pid: 55299)\r\n error_file: <N/A>\r\n traceback : Signal 6 (SIGABRT) received by PID 55299\r\n[3]:\r\n time : 2023-10-12_21:59:50\r\n host : aaa\r\n rank : 3 (local_rank: 3)\r\n exitcode : -6 (pid: 55300)\r\n error_file: <N/A>\r\n traceback : Signal 6 (SIGABRT) received by PID 55300\r\n[4]:\r\n time : 2023-10-12_21:59:50\r\n host : aaa\r\n rank : 4 (local_rank: 4)\r\n exitcode : -6 (pid: 55301)\r\n error_file: <N/A>\r\n traceback : Signal 6 (SIGABRT) received by PID 55301\r\n[5]:\r\n time : 2023-10-12_21:59:50\r\n host : aaa\r\n rank : 5 (local_rank: 5)\r\n exitcode : -6 (pid: 55302)\r\n error_file: <N/A>\r\n traceback : Signal 6 (SIGABRT) received by PID 55302\r\n[6]:\r\n time : 2023-10-12_21:59:50\r\n host : aaa\r\n rank : 6 (local_rank: 6)\r\n exitcode : -6 (pid: 55303)\r\n error_file: <N/A>\r\n traceback : Signal 6 (SIGABRT) received by PID 55303\r\n[7]:\r\n time : 2023-10-12_21:59:50\r\n host : aaa\r\n rank : 7 (local_rank: 7)\r\n exitcode : -6 (pid: 55304)\r\n error_file: <N/A>\r\n traceback : Signal 6 (SIGABRT) received by PID 55304\r\n------------------------------------------------------\r\nRoot Cause (first observed failure):\r\n[0]:\r\n time : 2023-10-12_21:59:50\r\n host : aaa\r\n rank : 0 (local_rank: 0)\r\n exitcode : -6 (pid: 55297)\r\n error_file: <N/A>\r\n traceback : Signal 6 (SIGABRT) received by PID 55297\r\n=====================================================\r\n```\r\n\r\n",
"What is your config when doing `accelerate launch`? (Run `accelerate env`)\r\n\r\nEdit: Actually we do find the issue. Not every rank is getting the same prepared model",
"\r\n\r\n\r\n> What is your config when doing `accelerate launch`? (Run `accelerate env`)\r\n> \r\n> Edit: Actually we do find the issue. Not every rank is getting the same prepared model\r\n\r\n- `Accelerate` version: 0.23.0\r\n- Platform: Linux-4.15.0-213-generic-x86_64-with-glibc2.27\r\n- Python version: 3.10.13\r\n- Numpy version: 1.24.1\r\n- PyTorch version (GPU?): 2.1.0+cu121 (True)\r\n- PyTorch XPU available: False\r\n- PyTorch NPU available: False\r\n- System RAM: 1007.56 GB\r\n- GPU type: NVIDIA A800 80GB PCIe\r\n- `Accelerate` default config:\r\n - compute_environment: LOCAL_MACHINE\r\n - distributed_type: MULTI_GPU\r\n - mixed_precision: bf16\r\n - use_cpu: False\r\n - debug: False\r\n - num_processes: 8\r\n - machine_rank: 0\r\n - num_machines: 1\r\n - gpu_ids: all\r\n - rdzv_backend: static\r\n - same_network: True\r\n - main_training_function: main\r\n - downcast_bf16: no\r\n - tpu_use_cluster: False\r\n - tpu_use_sudo: False\r\n - tpu_env: []",
"I encounter the same issue. Anyone got a solution?",
"> I encounter the same issue. Anyone got a solution?\r\n\r\nIt seems work out there, https://github.com/NVIDIA/nccl/issues/1027\r\nYou should add NCCL_P2P_DISABLE=1 before your command.\r\n",
"If the NCCL flag fixes, would be the same as #26814! ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"这是来自QQ邮箱的假期自动回复邮件。你好,我最近正在休假中,无法亲自回复你的邮件。我将在看见后,尽快给你回复。",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"The latest accelerate will automatically disable P2P if a card doesn't support it. Marking as solved unless someone else has issues",
"I'm facing similar issue. The training is fine with one GPU, but will stuck with multiple GPUs. \r\nSetting NCCL_P2P_DISABLE=1 will raise error:\r\n```\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [47,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [48,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [49,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [50,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [51,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [52,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [53,0,0] Assertion 
`srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [54,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [55,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [56,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [57,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [58,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [59,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [60,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [61,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [62,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [128,0,0], thread: [63,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], 
thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\n../aten/src/ATen/native/cuda/Indexing.cu:1146: indexSelectLargeIndex: block: [150,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed.\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/jialin/iterative_lora/llm.py\", line 309, in <module>\r\n trainer.train()\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/transformers/trainer.py\", line 1553, in train\r\n return inner_training_loop(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/transformers/trainer.py\", line 1835, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/transformers/trainer.py\", line 2679, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/transformers/trainer.py\", line 2704, in compute_loss\r\n outputs = model(**inputs)\r\n ^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/torch/nn/parallel/data_parallel.py\", line 171, in forward\r\n outputs = self.parallel_apply(replicas, inputs, kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/torch/nn/parallel/data_parallel.py\", line 181, in parallel_apply\r\n return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/torch/nn/parallel/parallel_apply.py\", line 89, in parallel_apply\r\n output.reraise()\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/torch/_utils.py\", line 644, in reraise\r\n raise exception\r\nRuntimeError: Caught RuntimeError in replica 1 on device 1.\r\nOriginal Traceback (most recent call last):\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/torch/nn/parallel/parallel_apply.py\", line 64, in _worker\r\n output = module(*input, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/transformers/models/opt/modeling_opt.py\", line 944, in forward\r\n outputs = self.model.decoder(\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/transformers/models/opt/modeling_opt.py\", line 710, in forward\r\n layer_outputs = decoder_layer(\r\n ^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/transformers/models/opt/modeling_opt.py\", line 330, in forward\r\n hidden_states, self_attn_weights, present_key_value = self.self_attn(\r\n ^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/anaconda3/envs/jialin_pytorch/lib/python3.11/site-packages/transformers/models/opt/modeling_opt.py\", line 225, in forward\r\n attn_weights, 
torch.tensor(torch.finfo(attn_weights.dtype).min, device=attn_weights.device)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nRuntimeError: CUDA error: device-side assert triggered\r\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\r\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n```\r\nwhat should I do now",
"这是来自QQ邮箱的假期自动回复邮件。你好,我最近正在休假中,无法亲自回复你的邮件。我将在看见后,尽快给你回复。",
"I finally solved this by disabling ACS in bios, ref https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/troubleshooting.html#pci-access-control-services-acs. Changing nvidia driver and cuda version doesn't help.\r\n\r\nThis test is very helpful. https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/troubleshooting.html#gpu-to-gpu-communication",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I am working on a cluster with SLURM - it took me forever to resolve this issue; however, when I changed the ddp backend to `--ddp_backend gloo`, it finally worked!",
"I'm on an 8x 3090 Supermicro system.\r\n\r\nI was on driver version 545 and had the bug unless I set NCCL_P2P_DISABLE=1.\r\n\r\nDowngrading to 535 makes things work without having to set NCCL_P2P_DISABLE=1"
] | 1,696 | 1,707 | 1,706 | NONE | null | ### System Info
- `transformers` version: 4.33.3
- Platform: Linux-4.15.0-213-generic-x86_64-with-glibc2.27
- Python version: 3.10.13
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.3.3
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Machine: 8 x A800 GPUs
### Who can help?
@ArthurZucker
@younesbelkada
@pacman100
@muellerzr
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I am running the following script:
```
python ContinuePretrainLlama.py \
--data_path ./PretrainData/kyoto-train.txt \
--output_dir ./llama2 \
--num_train_epochs 1 \
--per_device_train_batch_size 8 \
--save_strategy no
```
The core parts of the code that may be related to the issue are:
```python
from typing import Dict

import transformers
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import LlamaForCausalLM, LlamaTokenizer, Trainer

# NOTE: ModelArguments, DataArguments, TrainingArguments and the DEFAULT_*_TOKEN
# constants are defined in parts of the script omitted from this excerpt.


def preprocess(data, tokenizer):
input_encoding = tokenizer(data["text"], truncation=True, max_length=2048, padding="max_length",
return_tensors="np", return_special_tokens_mask=True, return_attention_mask=False)
# Create the final dictionary
result = {
"input_ids": input_encoding["input_ids"],
"special_tokens_mask": input_encoding["special_tokens_mask"],
}
return result
def make_pretrain_data_module(tokenizer: transformers.PreTrainedTokenizer, data_path, model) -> Dict:
"""Make dataset and collator for supervised fine-tuning."""
# train_dataset = PretrainedDataset(tokenizer=tokenizer, data_path=data_path)
dataset = load_dataset('text', data_files=data_path)
train_dataset = \
dataset.map(lambda data: preprocess(data, tokenizer), batched=True, desc="Processing", remove_columns=["text"])[
"train"]
data_collator = transformers.DataCollatorForLanguageModeling(tokenizer=tokenizer,
mlm=False,
return_tensors="pt")
return dict(train_dataset=train_dataset, eval_dataset=None, data_collator=data_collator)
def smart_tokenizer_and_embedding_resize(
special_tokens_dict: Dict,
tokenizer: transformers.PreTrainedTokenizer,
model: transformers.PreTrainedModel,
):
"""Resize tokenizer and embedding.
Note: This is the unoptimized version that may make your embedding size not be divisible by 64.
"""
num_new_tokens = tokenizer.add_special_tokens(special_tokens_dict)
model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=8)
if num_new_tokens > 0:
input_embeddings = model.get_input_embeddings().weight.data
output_embeddings = model.get_output_embeddings().weight.data
input_embeddings_avg = input_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True)
output_embeddings_avg = output_embeddings[:-num_new_tokens].mean(dim=0, keepdim=True)
input_embeddings[-num_new_tokens:] = input_embeddings_avg
output_embeddings[-num_new_tokens:] = output_embeddings_avg
def train():
parser = transformers.HfArgumentParser((ModelArguments, DataArguments, TrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses()
if model_args.tokenizer_path is None:
model_args.tokenizer_path = model_args.model_name_or_path
tokenizer = LlamaTokenizer.from_pretrained(model_args.tokenizer_path,
model_max_length=training_args.model_max_length, padding_side="right",
use_fast=False, cache_dir=training_args.cache_dir)
special_tokens_dict = dict()
if tokenizer.pad_token is None:
special_tokens_dict["pad_token"] = DEFAULT_PAD_TOKEN
if tokenizer.eos_token is None:
special_tokens_dict["eos_token"] = DEFAULT_EOS_TOKEN
if tokenizer.bos_token is None:
special_tokens_dict["bos_token"] = DEFAULT_BOS_TOKEN
if tokenizer.unk_token is None:
special_tokens_dict["unk_token"] = DEFAULT_UNK_TOKEN
model = LlamaForCausalLM.from_pretrained(model_args.model_name_or_path).cuda(0)
smart_tokenizer_and_embedding_resize(
special_tokens_dict=special_tokens_dict,
tokenizer=tokenizer,
model=model,
)
if model_args.peft_lora:
if model_args.lora_config is None:
raise ValueError("Please specify the path to the PEFT config.")
lora_config = LoraConfig(**LoraConfig.from_json_file(model_args.lora_config))
model = get_peft_model(model, lora_config)
print("You are using lora model!\n")
model.print_trainable_parameters()
data_module = make_pretrain_data_module(tokenizer=tokenizer, data_path=data_args.data_path, model=model)
trainer = Trainer(model=model, tokenizer=tokenizer, args=training_args, **data_module)
trainer.train()
trainer.save_state()
trainer.save_model(output_dir=training_args.output_dir)
if __name__ == "__main__":
train()
```
As we can see, the GPU memory is successfully allocated
<img width="706" alt="image" src="https://github.com/huggingface/transformers/assets/61892155/297cf1a7-c157-4f16-8ac7-505d118b20cc">
It has been stuck at 0% for more than 20 hours
<img width="1088" alt="image" src="https://github.com/huggingface/transformers/assets/61892155/a4d11920-f2ab-4335-9859-001bc9bb7ff6">
And I can't stop it with Control + C
<img width="1091" alt="image" src="https://github.com/huggingface/transformers/assets/61892155/3951a217-c46f-4baa-8949-3fb43a4e9388">
This is the dashboard link in wandb
https://wandb.ai/innovation_club/huggingface/runs/brca7vz5?workspace=user-
Additional Information:
Upon testing, the code runs perfectly on the CPU. However, when I shift to a multi-GPU setup, the training process doesn't proceed. It's essential to note that the memory allocation on the GPUs does take place, indicating that the process has initiated, but no forward or backward computations are observed.
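To help narrow down where the hang occurs, here is a minimal NCCL sanity check. This is a hedged suggestion rather than part of the original report: the file name and the `torchrun` launch in the comments are illustrative assumptions, and it only exercises the same `torch.distributed` collectives that the Trainer relies on.

```python
# nccl_check.py - minimal GPU-to-GPU communication test (hedged sketch).
# Suggested launch (assumption): torchrun --nproc_per_node=8 nccl_check.py
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")
rank = dist.get_rank()          # single-node assumption: rank == local GPU index
torch.cuda.set_device(rank)

x = torch.ones(1, device="cuda") * rank
dist.all_reduce(x)              # hangs here if inter-GPU communication is broken
print(f"rank {rank}: all_reduce ok, sum = {x.item()}")

dist.destroy_process_group()
```

If this also hangs, the problem is at the NCCL/communication level rather than in the training code; launching with `NCCL_DEBUG=INFO` and, as a test, `NCCL_P2P_DISABLE=1` can help confirm that.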
### Expected behavior
Would appreciate any insights or suggestions on resolving this. Thank you!
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26724/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26724/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26723 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26723/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26723/comments | https://api.github.com/repos/huggingface/transformers/issues/26723/events | https://github.com/huggingface/transformers/issues/26723 | 1,936,536,171 | I_kwDOCUB6oc5zbTJr | 26,723 | [docs] Refactor optimizing inference section | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Sounds like a great idea @stevhliu ! thanks for proposing to take the lead on this! Would be very happy to look at your PR once ready! 🤗 "
] | 1,696 | 1,698 | 1,698 | MEMBER | null | Hey y’all! I’m planning on simplifying the Optimizing Inference section to be more concise. The [inference on many GPUs](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_many) is essentially the same as the [inference on one GPU](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one) doc. For example:
1. The Flash Attention section links back to the single GPU doc.
2. BetterTransformer decoder model and encoder model sections are more or less identical.
3. The Advanced usage section is also the same.
With so much of the same content being reused, I think combining the GPU inference docs under a single page makes more sense instead of adding more noise to the docs. We can add a callout at the beginning of the doc saying these optimizations work on single and multi-GPU setups, and clearly document where usage between single and multi-GPUs differ (such as using `max_memory` to allocate RAM to each GPU).
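A short, hedged snippet could make that multi-GPU difference concrete on the combined page. The sketch below only illustrates the `device_map`/`max_memory` pattern mentioned above; the checkpoint name and memory budgets are placeholder assumptions, not recommendations.

```python
# Hedged sketch: capping per-device memory when sharding a model for inference.
# The model id and the GiB limits are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "facebook/opt-1.3b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",                                    # let Accelerate place the layers
    max_memory={0: "20GiB", 1: "20GiB", "cpu": "64GiB"},  # per-GPU and CPU budgets
)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(0)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On a single GPU the same call works with just `device_map="auto"`, which is exactly the kind of difference the callout could spell out.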
Finally, I’m also considering removing the [inference on specialized hardware](https://huggingface.co/docs/transformers/main/en/perf_infer_special) doc since this has been empty for a while now with no new updates, and this also seems to be more of an Optimum topic.
Let me know what you think! cc @LysandreJik @MKhalusova @younesbelkada | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26723/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26723/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26722 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26722/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26722/comments | https://api.github.com/repos/huggingface/transformers/issues/26722/events | https://github.com/huggingface/transformers/pull/26722 | 1,935,896,200 | PR_kwDOCUB6oc5cadwb | 26,722 | [WIP] Add FA2 for all Bart-like | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26722). All of your documentation changes will be reflected on that endpoint.",
"Verified that FA2 works by checking Whisper. Bart's attention is exactly the same as Whisper so it should as well. I will run some better benchmarks later.\r\n\r\n@ArthurZucker @younesbelkada could you do a first review here, just for the following files:\r\n- `tests/test_modeling_common.py`\r\n- `src/transformers/models/bart/modeling_bart.py`\r\n[ignore Whisper completely now, please]\r\n\r\nIt would be nice to agree on these files before running `make fix-copies` which will change 10+ other modeling files.\r\nThe implementation was pretty straight-forward as I can more or less copy-paste all the code from Llama (nice job @younesbelkada!)\r\n\r\nSome comments:\r\n- 1.) The flash attention tests are very nicely implemented in `tests/test_modeling_common.py`, but it looks like the tolerance is too high to catch any incorrect masking or other settings. E.g. in the beginning for BART I had an incorrect scaling factor in the attention and all tests passed anyways here. We might want to look into this.\r\n- 2.) I'm not super happy about passing around both `attention_mask` and `padding_mask` all the time. This makes the code really difficult to read and is quite confusing (what is the difference between the two?!). As far as I understand it the two masks are the same - the only reason we use `padding_mask` in addition to `attention_mask` is because the attention_mask is expanded and thus can't be used for FA. I wonder whether we should do a bigger refactor here actually and instead of expanding the attention_mask in the beginning we only expand it right before the attention so that we don't have to pass around both `padding_mask` and `attention_mask`. We could even cache the expanded mask if needed for speed. I would be strongly in favor of **not** having both a padding mask and an attention mask. Also cc @fxmarty here. \r\n- 3.) Can we make the automatic conversion to intended FA precision a bit more robust. E.g. see here: https://github.com/huggingface/transformers/pull/26722/files#r1353462371 . Aren't there use cases where the user would like to train in bfloat16, but might have a layer norm in fp32? cc @younesbelkada \r\n\r\n",
"Let's also make sure to update the `.md` once you have the expected performance gains and benchmark the other models where we copy from (could help debug / make sure everything is working properly).",
"Thanks for the comments:\r\n\r\n> 2- I think that it is the right approach indeed to cache the expanded mask. The attention mask format that FA-2 expects is the raw 'padding mask' --> 1 for attended tokens and 0 for padding tokens. It is possible to convert the expanded mask to padding mask but creates a huge overhead that makes it not interesting for long sequences (with FA-2 being slower than native). However I know caching some tensors as a buffer can cause some weird issues with accelerate, I would make sure that the accelerate tests pass in case we decide to go for that approach. RUN_SLOW=1 pytest -m accelerate_tests tests/models/bart/test_modeling_bart.py\r\n\r\nI see, I'm not sure how easy it'll be to get this working in a clean way then",
"Before continuing with this PR, I'll wait for https://github.com/huggingface/transformers/pull/26560 to be merged and also will see if we find a cleaner solution for 2.) ",
"OK I'll work on #26560 ASAP and let you know once that PR gets merged",
"https://github.com/huggingface/transformers/pull/26846 is now merged, let me know if you want me to help you on this PR by updating the necessary changes + benchmarking",
"**Update**:\r\n\r\nThe PR now works for Bart. @ArthurZucker @younesbelkada @fxmarty @LysandreJik could you give the design chosen for BART a look here and if ok, I'll apply it to all other Bart-like models.\r\n\r\n**Please only review `modeling_bart.py` !!!**"
] | 1,696 | 1,699 | 1,699 | MEMBER | null | # What does this PR do?
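The review thread above debates the difference between the raw `padding_mask` that Flash Attention 2 consumes and the additive expanded `attention_mask` used by the eager path. Below is a minimal sketch of the two formats, with illustrative tensor values only; it is not code from this PR.

```python
# Hedged illustration of the two mask formats discussed in the review comments.
import torch

padding_mask = torch.tensor([[1, 1, 1, 0]])  # 1 = attended token, 0 = padding (FA-2 style)

# Additive "expanded" mask broadcast over (batch, 1, tgt_len, src_len): 0 where attended,
# a large negative value where padded, added to the attention scores before softmax.
expanded_mask = (1.0 - padding_mask[:, None, None, :].float()) * torch.finfo(torch.float32).min
```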
Add FA2 to all Bart-like models | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26722/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26722/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26722",
"html_url": "https://github.com/huggingface/transformers/pull/26722",
"diff_url": "https://github.com/huggingface/transformers/pull/26722.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26722.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26721 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26721/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26721/comments | https://api.github.com/repos/huggingface/transformers/issues/26721/events | https://github.com/huggingface/transformers/pull/26721 | 1,935,871,635 | PR_kwDOCUB6oc5caYZx | 26,721 | fixed docstring of bert, bert_generation and bert_japanese | {
"login": "neet-14",
"id": 105306415,
"node_id": "U_kgDOBkbZLw",
"avatar_url": "https://avatars.githubusercontent.com/u/105306415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neet-14",
"html_url": "https://github.com/neet-14",
"followers_url": "https://api.github.com/users/neet-14/followers",
"following_url": "https://api.github.com/users/neet-14/following{/other_user}",
"gists_url": "https://api.github.com/users/neet-14/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neet-14/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neet-14/subscriptions",
"organizations_url": "https://api.github.com/users/neet-14/orgs",
"repos_url": "https://api.github.com/users/neet-14/repos",
"events_url": "https://api.github.com/users/neet-14/events{/privacy}",
"received_events_url": "https://api.github.com/users/neet-14/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
}
] | [] | 1,696 | 1,697 | 1,697 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26721/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26721/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26721",
"html_url": "https://github.com/huggingface/transformers/pull/26721",
"diff_url": "https://github.com/huggingface/transformers/pull/26721.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26721.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26719 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26719/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26719/comments | https://api.github.com/repos/huggingface/transformers/issues/26719/events | https://github.com/huggingface/transformers/pull/26719 | 1,935,213,517 | PR_kwDOCUB6oc5cYF_h | 26,719 | Add support for loading GPTQ models on CPU | {
"login": "vivekkhandelwal1",
"id": 68822896,
"node_id": "MDQ6VXNlcjY4ODIyODk2",
"avatar_url": "https://avatars.githubusercontent.com/u/68822896?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vivekkhandelwal1",
"html_url": "https://github.com/vivekkhandelwal1",
"followers_url": "https://api.github.com/users/vivekkhandelwal1/followers",
"following_url": "https://api.github.com/users/vivekkhandelwal1/following{/other_user}",
"gists_url": "https://api.github.com/users/vivekkhandelwal1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vivekkhandelwal1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vivekkhandelwal1/subscriptions",
"organizations_url": "https://api.github.com/users/vivekkhandelwal1/orgs",
"repos_url": "https://api.github.com/users/vivekkhandelwal1/repos",
"events_url": "https://api.github.com/users/vivekkhandelwal1/events{/privacy}",
"received_events_url": "https://api.github.com/users/vivekkhandelwal1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@SunMarc @younesbelkada Please take a look at this PR.",
">\r\n> Can you share a small snippet you used to test out your implementation?\r\n\r\nHi @younesbelkada, here's a code snippet for the `Falcon-180B-Chat-GPTQ` model. Right now, I'm working on this and I get it running through this.\r\n```\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline\r\nimport torch\r\n\r\nmodel_name_or_path = \"TheBloke/Falcon-180B-Chat-GPTQ\"\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name_or_path,\r\n device_map=\"auto\",\r\n revision=\"main\",\r\n load_gptq_on_cpu=True)\r\nmodel = model.to(torch.float32)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)\r\n\r\nprompt = \"Tell me about AI\"\r\nprompt_template=f'''User: {prompt}\r\nAssistant: '''\r\n\r\nprint(\"\\n\\n*** Generate:\")\r\n\r\ninput_ids = tokenizer(prompt_template, return_tensors='pt').input_ids\r\noutput = model.generate(inputs=input_ids, do_sample=True, temperature=0.7, max_new_tokens=512)\r\nprint(tokenizer.decode(output[0]))\r\n\r\nprint(\"*** Pipeline:\")\r\npipe = pipeline(\r\n \"text-generation\",\r\n model=model,\r\n tokenizer=tokenizer,\r\n max_new_tokens=512,\r\n temperature=0.7,\r\n do_sample=True,\r\n top_p=0.95,\r\n repetition_penalty=1.15\r\n)\r\n\r\nprint(pipe(prompt_template)[0]['generated_text'])\r\n```",
"> Thanks for the great contribution, I did not managed to succesfully run a GPTQ model using your branch:\r\n> \r\n> ```python\r\n> import torch\r\n> from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig\r\n> \r\n> model_id = \"ybelkada/opt-125m-gptq-4bit\"\r\n> \r\n> quantization_config = GPTQConfig(\r\n> bits=4,\r\n> disable_exllama=True\r\n> )\r\n> \r\n> model = AutoModelForCausalLM.from_pretrained(model_id, load_gptq_on_cpu=True, quantization_config=quantization_config, torch_dtype=torch.float32)\r\n> tokenizer = AutoTokenizer.from_pretrained(model_id)\r\n> \r\n> text = \"Hello my name is\"\r\n> input_ids = tokenizer(text, return_tensors=\"pt\").input_ids\r\n> \r\n> out = model.generate(input_ids)\r\n> print(tokenizer.decode(out[0], skip_special_tokens=True))\r\n> ```\r\n> \r\n> Can you share a small snippet you used to test out your implementation?\r\n\r\nYou need to add `model = model.to(torch.float32)` before doing the inference. Additionally, you would require these changes: https://github.com/PanQiWei/AutoGPTQ/pull/367",
"@younesbelkada, here's a smaller falcon GPTQ variant loading and executing fine, with the changes suggested above:\r\n```\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, GPTQConfig\r\nimport torch\r\n\r\nmodel_name_or_path = \"TheBloke/falcon-7b-instruct-GPTQ\"\r\nquantization_config = GPTQConfig(\r\n bits=4,\r\n disable_exllama=True\r\n)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(model_name_or_path,\r\n device_map=\"auto\",\r\n trust_remote_code=True,\r\n revision=\"main\",\r\n quantization_config=quantization_config,\r\n load_gptq_on_cpu=True)\r\nmodel = model.to(torch.float32)\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)\r\n\r\nprompt = \"Tell me about AI\"\r\nprompt_template=f'''User: {prompt}\r\nAssistant: '''\r\n\r\nprint(\"\\n\\n*** Generate:\")\r\n\r\ninput_ids = tokenizer(prompt_template, return_tensors='pt').input_ids\r\noutput = model.generate(inputs=input_ids, do_sample=True, temperature=0.7, max_new_tokens=512)\r\nprint(tokenizer.decode(output[0]))\r\n\r\nprint(\"*** Pipeline:\")\r\npipe = pipeline(\r\n \"text-generation\",\r\n model=model,\r\n tokenizer=tokenizer,\r\n max_new_tokens=512,\r\n temperature=0.7,\r\n do_sample=True,\r\n top_p=0.95,\r\n repetition_penalty=1.15\r\n)\r\n\r\nprint(pipe(prompt_template)[0]['generated_text'])\r\n```",
"OK thanks, I'll run some tests and report back here",
"> OK thanks, I'll run some tests and report back here\r\n\r\nHi @younesbelkada, did you get a chance to look at this?",
"Hi @vivekkhandelwal1 \r\nI did not managed to run an inference with your PR on CPU, I have used another model since I don't have access to an instance that can support falcon-180B\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig\r\n\r\ncheckpoint = \"marcsun13/opt-350m-gptq-4bit\"\r\ndevice = \"cpu\" # for GPU usage or \"cpu\" for CPU usage\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n\r\nquantization_config = GPTQConfig(bits=4, disable_exllama=True)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(checkpoint, low_cpu_mem_usage=True, quantization_config=quantization_config, torch_dtype=torch.bfloat16).to(device)\r\n\r\ninputs = tokenizer.encode(\"Hello how are you?\", return_tensors=\"pt\").to(device)\r\noutputs = model.generate(inputs, max_new_tokens=4, do_sample=False)\r\nprint(tokenizer.decode(outputs[0]))\r\n```\r\n\r\nI get:\r\n\r\n```bash\r\n out = torch.matmul(x.half(), weight)\r\nRuntimeError: \"addmm_impl_cpu_\" not implemented for 'Half'\r\n``` \r\n\r\nCan you let me know if you manage to run the snippet above?",
"> Hi @vivekkhandelwal1 I did not managed to run an inference with your PR on CPU, I have used another model since I don't have access to an instance that can support falcon-180B\r\n> \r\n> ```python\r\n> import torch\r\n> from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig\r\n> \r\n> checkpoint = \"marcsun13/opt-350m-gptq-4bit\"\r\n> device = \"cpu\" # for GPU usage or \"cpu\" for CPU usage\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n> \r\n> quantization_config = GPTQConfig(bits=4, disable_exllama=True)\r\n> \r\n> model = AutoModelForCausalLM.from_pretrained(checkpoint, low_cpu_mem_usage=True, quantization_config=quantization_config, torch_dtype=torch.bfloat16).to(device)\r\n> \r\n> inputs = tokenizer.encode(\"Hello how are you?\", return_tensors=\"pt\").to(device)\r\n> outputs = model.generate(inputs, max_new_tokens=4, do_sample=False)\r\n> print(tokenizer.decode(outputs[0]))\r\n> ```\r\n> \r\n> I get:\r\n> \r\n> ```shell\r\n> out = torch.matmul(x.half(), weight)\r\n> RuntimeError: \"addmm_impl_cpu_\" not implemented for 'Half'\r\n> ```\r\n> \r\n> Can you let me know if you manage to run the snippet above?\r\n\r\nHi @younesbelkada, I think you missed my comments above. Actually, to run the model on the CPU, you also need to convert the model to float. After loading the model, you have to do `model = model.to(torch.float32)`. After doing this, you should be able to run the model.",
"Thanks @vivekkhandelwal1 , indeed I forgot to cast it to fp32, however running this script:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig\r\n\r\ncheckpoint = \"marcsun13/opt-350m-gptq-4bit\"\r\ndevice = \"cpu\" # for GPU usage or \"cpu\" for CPU usage\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n\r\nquantization_config = GPTQConfig(bits=4, disable_exllama=True)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(checkpoint, low_cpu_mem_usage=True, quantization_config=quantization_config).to(device)\r\n\r\nmodel = model.to(torch.float32)\r\n\r\ninputs = tokenizer.encode(\"Hello how are you?\", return_tensors=\"pt\").to(device)\r\noutputs = model.generate(inputs, max_new_tokens=4, do_sample=False)\r\nprint(tokenizer.decode(outputs[0]))\r\n```\r\n\r\nLed to the same error, what version of optimum and `auto-gptq` are you using?",
"> Thanks @vivekkhandelwal1 , indeed I forgot to cast it to fp32, however running this script:\r\n> ....\r\n> Led to the same error, what version of optimum and `auto-gptq` are you using?\r\n\r\nHi @younesbelkada, can you please add this flag `load_gptq_on_cpu=True` while loading the model? The below code runs fine for me:\r\n```\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig\r\n\r\ncheckpoint = \"marcsun13/opt-350m-gptq-4bit\"\r\ndevice = \"cpu\" # for GPU usage or \"cpu\" for CPU usage\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n\r\nquantization_config = GPTQConfig(bits=4, disable_exllama=True)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(checkpoint, low_cpu_mem_usage=True, quantization_config=quantization_config, load_gptq_on_cpu=True).to(device)\r\n\r\nmodel = model.to(torch.float32)\r\n\r\ninputs = tokenizer.encode(\"Hello how are you?\", return_tensors=\"pt\").to(device)\r\noutputs = model.generate(inputs, max_new_tokens=4, do_sample=False)\r\nprint(tokenizer.decode(outputs[0]))\r\n```\r\n\r\nThe result:\r\n```\r\nDownloading pytorch_model.bin: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 125M/125M [00:03<00:00, 37.4MB/s]\r\nCUDA extension not installed.\r\nDownloading (…)neration_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 137/137 [00:00<00:00, 1.10MB/s]\r\n</s>Hello how are you?????\r\n```",
"@younesbelkada, now I got it. I think making these changes https://github.com/PanQiWei/AutoGPTQ/pull/367 locally would fix the issue for you.",
"> Can you also confirm that loading the model in float32 works for you instead of first loading your model in fp16 then cast it back?\r\n\r\n@younesbelkada, first I tried loading the model in float32 only but that didn't work. Even after passing the `torch_dtype` as `torch.float32`, the model loaded had the weight in fp16. The same was the issue with your code snippet, and that's why we have to do the casting explicitly. If you think there's an issue, and it could be fixed then I would be happy to drop this patch.\r\n\r\nEdit: Sorry, we can't drop this patch, since it enables loading the model on the CPU, but yeah `.to` is required if the `torch_dtype` is not working fine.",
"Hi @vivekkhandelwal1 \r\n\r\nHmm this is a bit strange, running: \r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig\r\n\r\ncheckpoint = \"marcsun13/opt-350m-gptq-4bit\"\r\ndevice = \"cpu\" # for GPU usage or \"cpu\" for CPU usage\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n\r\nquantization_config = GPTQConfig(bits=4, disable_exllama=True)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(checkpoint, low_cpu_mem_usage=True, quantization_config=quantization_config, torch_dtype=torch.float32).to(device)\r\n\r\nfor n, p in model.named_parameters():\r\n print(n, p.dtype)\r\n```\r\n\r\nreturns:\r\n\r\n```bash\r\nmodel.decoder.embed_tokens.weight torch.float32\r\nmodel.decoder.embed_positions.weight torch.float32\r\nmodel.decoder.final_layer_norm.weight torch.float32\r\nmodel.decoder.final_layer_norm.bias torch.float32\r\nmodel.decoder.layers.0.self_attn_layer_norm.weight torch.float32\r\nmodel.decoder.layers.0.self_attn_layer_norm.bias torch.float32\r\nmodel.decoder.layers.0.final_layer_norm.weight torch.float32\r\nmodel.decoder.layers.0.final_layer_norm.bias torch.float32\r\nmodel.decoder.layers.1.self_attn_layer_norm.weight torch.float32\r\nmodel.decoder.layers.1.self_attn_layer_norm.bias torch.float32\r\nmodel.decoder.layers.1.final_layer_norm.weight torch.float32\r\nmodel.decoder.layers.1.final_layer_norm.bias torch.float32\r\nmodel.decoder.layers.2.self_attn_layer_norm.weight torch.float32\r\nmodel.decoder.layers.2.self_attn_layer_norm.bias torch.float32\r\nmodel.decoder.layers.2.final_layer_norm.weight torch.float32\r\nmodel.decoder.layers.2.final_layer_norm.bias torch.float32\r\nmodel.decoder.layers.3.self_attn_layer_norm.weight torch.float32\r\nmodel.decoder.layers.3.self_attn_layer_norm.bias torch.float32\r\nmodel.decoder.layers.3.final_layer_norm.weight torch.float32\r\nmodel.decoder.layers.3.final_layer_norm.bias torch.float32\r\nmodel.decoder.layers.4.self_attn_layer_norm.weight torch.float32\r\nmodel.decoder.layers.4.self_attn_layer_norm.bias torch.float32\r\nmodel.decoder.layers.4.final_layer_norm.weight torch.float32\r\nmodel.decoder.layers.4.final_layer_norm.bias torch.float32\r\nmodel.decoder.layers.5.self_attn_layer_norm.weight torch.float32\r\nmodel.decoder.layers.5.self_attn_layer_norm.bias torch.float32\r\nmodel.decoder.layers.5.final_layer_norm.weight torch.float32\r\nmodel.decoder.layers.5.final_layer_norm.bias torch.float32\r\nmodel.decoder.layers.6.self_attn_layer_norm.weight torch.float32\r\nmodel.decoder.layers.6.self_attn_layer_norm.bias torch.float32\r\nmodel.decoder.layers.6.final_layer_norm.weight torch.float32\r\nmodel.decoder.layers.6.final_layer_norm.bias torch.float32\r\nmodel.decoder.layers.7.self_attn_layer_norm.weight torch.float32\r\nmodel.decoder.layers.7.self_attn_layer_norm.bias torch.float32\r\nmodel.decoder.layers.7.final_layer_norm.weight torch.float32\r\nmodel.decoder.layers.7.final_layer_norm.bias torch.float32\r\nmodel.decoder.layers.8.self_attn_layer_norm.weight torch.float32\r\nmodel.decoder.layers.8.self_attn_layer_norm.bias torch.float32\r\nmodel.decoder.layers.8.final_layer_norm.weight torch.float32\r\nmodel.decoder.layers.8.final_layer_norm.bias torch.float32\r\nmodel.decoder.layers.9.self_attn_layer_norm.weight torch.float32\r\nmodel.decoder.layers.9.self_attn_layer_norm.bias torch.float32\r\nmodel.decoder.layers.9.final_layer_norm.weight torch.float32\r\nmodel.decoder.layers.9.final_layer_norm.bias 
torch.float32\r\nmodel.decoder.layers.10.self_attn_layer_norm.weight torch.float32\r\nmodel.decoder.layers.10.self_attn_layer_norm.bias torch.float32\r\nmodel.decoder.layers.10.final_layer_norm.weight torch.float32\r\nmodel.decoder.layers.10.final_layer_norm.bias torch.float32\r\nmodel.decoder.layers.11.self_attn_layer_norm.weight torch.float32\r\nmodel.decoder.layers.11.self_attn_layer_norm.bias torch.float32\r\nmodel.decoder.layers.11.final_layer_norm.weight torch.float32\r\nmodel.decoder.layers.11.final_layer_norm.bias torch.float32\r\n```\r\n\r\n+ inspecting some `QuantLinear` modules I can't see any attribute in fp16. Let me get back to you on this",
"Hi @vivekkhandelwal1 when building auto-gptq using https://github.com/PanQiWei/AutoGPTQ/pull/367 + running your script above I am facing.\r\n\r\n```bash\r\n File \"../qlinear_exllamav2.py\", line 137, in post_init\r\n assert self.qweight.device.type == \"cuda\"\r\nAssertionError\r\n```\r\n\r\nDo you think there is anything I missed on my end?",
"> Hi @vivekkhandelwal1 when building auto-gptq using [PanQiWei/AutoGPTQ#367](https://github.com/PanQiWei/AutoGPTQ/pull/367) + running your script above I am facing.\r\n> \r\n> ```shell\r\n> File \"../qlinear_exllamav2.py\", line 137, in post_init\r\n> assert self.qweight.device.type == \"cuda\"\r\n> AssertionError\r\n> ```\r\n> \r\n> Do you think there is anything I missed on my end?\r\n\r\nCan you please try building the auto_gptq with:\r\n```\r\nBUILD_CUDA_EXT=0 pip install -v .\r\n```",
"Hi @younesbelkada, my Auto-GPTQ PR https://github.com/PanQiWei/AutoGPTQ/pull/367 is merged. Can we go ahead with this?",
"Hi @vivekkhandelwal1 let me have another look and get back to you",
"Hi @vivekkhandelwal1 , thanks again! \r\nI made https://github.com/PanQiWei/AutoGPTQ/pull/376 in order for this script: \r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig\r\n\r\ncheckpoint = \"marcsun13/opt-350m-gptq-4bit\"\r\ndevice = \"cpu\" # for GPU usage or \"cpu\" for CPU usage\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n\r\nquantization_config = GPTQConfig(bits=4, disable_exllama=True)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(checkpoint, low_cpu_mem_usage=True, quantization_config=quantization_config, torch_dtype=torch.float32).to(device)\r\n\r\ninputs = tokenizer.encode(\"Hello how are you?\", return_tensors=\"pt\").to(device)\r\noutputs = model.generate(inputs, max_new_tokens=4, do_sample=False)\r\nprint(tokenizer.decode(outputs[0]))\r\n```\r\nto work. However, I found it quite slow to get generations on CPU, out of curiosity how long does it takes to generate text on falcon 180B on your end?",
"> However, I found it quite slow to get generations on CPU, out of curiosity how long does it takes to generate text on falcon 180B on your end?\r\n\r\nHi @younesbelkada, the performance of Falcon-180B-GPTQ on the CPU is quite slow. We lower this model through Torch-MLIR and then compile via IREE to run over the CUDA and the Rocm backend. The Torch-MLIR doesn't support cuda tensors that's why we need to have this model loaded over the CPU.\r\n\r\nAlso, were you getting some errors even after my patch was merged in Auto-GPTQ? I mean, what's the reason behind this https://github.com/PanQiWei/AutoGPTQ/pull/376/files.",
"Hi @vivekkhandelwal1 \r\nYes I was getting some errors even with your patch, in my script I don't call `to()` with a torch dtype but directly load in fp32 with `torch_dtype=torch.float32`, for some reasons some weights still remained in fp32. Note we don't support explicit model casting for quantized weights since: https://github.com/huggingface/transformers/pull/26761",
"> Hi @vivekkhandelwal1 Yes I was getting some errors even with your patch, in my script I don't call `to()` with a torch dtype but directly load in fp32 with `torch_dtype=torch.float32`, for some reasons some weights still remained in fp32. Note we don't support explicit model casting for quantized weights since: #26761\r\n\r\nI'm also not doing the explicit model casting and without that it worked for me.",
"> Author\r\n\r\nI have made two more changes(https://github.com/PanQiWei/AutoGPTQ/pull/367/files#diff-c7731808a14c99106c4a0e48729c7435181a1085978ca567433d28a5f2473b1dL272) in the Auto-GPTQ if you try the complete changes of this https://github.com/PanQiWei/AutoGPTQ/pull/367/files, then your patch is not required. I have tried it just now for the script you shared.",
"Hi @vivekkhandelwal1 \r\nI cloned again auto-gptq and I can confirm I am on the main branch `git log` gives me your commit and I uninstalled and installed again auto-gptq with `BUILD_CUDA_EXT=0 pip install -U -v .` and ran into:\r\n\r\n```bash\r\ns/qlinear/qlinear_cuda_old.py\", line 269, in forward\r\n out = torch.matmul(x.to(weight.dtype), weight)\r\nRuntimeError: \"addmm_impl_cpu_\" not implemented for 'Half'\r\n```\r\n\r\nSharing again the snippet I use: \r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig\r\n\r\ncheckpoint = \"marcsun13/opt-350m-gptq-4bit\"\r\ndevice = \"cpu\" # for GPU usage or \"cpu\" for CPU usage\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n\r\nquantization_config = GPTQConfig(bits=4, disable_exllama=True)\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(checkpoint, low_cpu_mem_usage=True, quantization_config=quantization_config, torch_dtype=torch.float32).to(device)\r\n\r\ninputs = tokenizer.encode(\"Hello how are you?\", return_tensors=\"pt\").to(device)\r\noutputs = model.generate(inputs, max_new_tokens=4, do_sample=False)\r\nprint(tokenizer.decode(outputs[0]))\r\n``` \r\n\r\nLet me know if I missed anything!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26719). All of your documentation changes will be reflected on that endpoint.",
"> Hi @vivekkhandelwal1 I cloned again auto-gptq and I can confirm I am on the main branch `git log` gives me your commit and I uninstalled and installed again auto-gptq with `BUILD_CUDA_EXT=0 pip install -U -v .` and ran into:\r\n> \r\n> ```shell\r\n> s/qlinear/qlinear_cuda_old.py\", line 269, in forward\r\n> out = torch.matmul(x.to(weight.dtype), weight)\r\n> RuntimeError: \"addmm_impl_cpu_\" not implemented for 'Half'\r\n> ```\r\n> \r\n> Sharing again the snippet I use:\r\n> \r\n> ```python\r\n> import torch\r\n> from transformers import AutoModelForCausalLM, AutoTokenizer, GPTQConfig\r\n> \r\n> checkpoint = \"marcsun13/opt-350m-gptq-4bit\"\r\n> device = \"cpu\" # for GPU usage or \"cpu\" for CPU usage\r\n> \r\n> tokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\n> \r\n> quantization_config = GPTQConfig(bits=4, disable_exllama=True)\r\n> \r\n> model = AutoModelForCausalLM.from_pretrained(checkpoint, low_cpu_mem_usage=True, quantization_config=quantization_config, torch_dtype=torch.float32).to(device)\r\n> \r\n> inputs = tokenizer.encode(\"Hello how are you?\", return_tensors=\"pt\").to(device)\r\n> outputs = model.generate(inputs, max_new_tokens=4, do_sample=False)\r\n> print(tokenizer.decode(outputs[0]))\r\n> ```\r\n> \r\n> Let me know if I missed anything!\r\n\r\nHi @younesbelkada, I just tried your script on another machine with a fresh setup. I cloned the auto_gptq repo built it and made the changes in transformers same as this PR, and it worked for me. If you still have some issues locally which are fixed by your changes in Auto-GPTQ then we can do that as well. But, can we now go ahead with this patch? If you want then we can discuss this on a call, I can help you set this up and try.",
"Thanks a lot @vivekkhandelwal1 \r\nThe changes look great to me, I tried it again :D and still facing the same issue :/ I suggest we wait https://github.com/PanQiWei/AutoGPTQ/pull/376 to be merged before merging this PR\r\nCan you also add one line in the documentation explaining that you can perform CPU inference with auto-gptq? the new paragraph can come right after: https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/quantization.md#exllama-kernels-for-faster-inference - what do you think?\r\n",
"> Thanks a lot @vivekkhandelwal1 The changes look great to me, I tried it again :D and still facing the same issue :/ I suggest we wait [PanQiWei/AutoGPTQ#376](https://github.com/PanQiWei/AutoGPTQ/pull/376) to be merged before merging this PR Can you also add one line in the documentation explaining that you can perform CPU inference with auto-gptq? the new paragraph can come right after: https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/quantization.md#exllama-kernels-for-faster-inference - what do you think?\r\n\r\n@younesbelkada done! Also, I'm still not sure if your patch is needed or not, also the GPTQ guys don't seem convinced about getting your patch in. Can we get this in so that I'm unblocked on this, and it works for me as well?",
"Hi @vivekkhandelwal1 \r\nThanks! I think @SunMarc managed to reproduce the same issue that I had on his end, so I prefer to wait before https://github.com/PanQiWei/AutoGPTQ/pull/385 gets merged, let me ping @fxmarty internally and see if we can get this merged ASAP\r\nThanks again!",
"> Hi @vivekkhandelwal1 Thanks! I think @SunMarc managed to reproduce the same issue that I had on his end, so I prefer to wait before [PanQiWei/AutoGPTQ#385](https://github.com/PanQiWei/AutoGPTQ/pull/385) gets merged, let me ping @fxmarty internally and see if we can get this merged ASAP Thanks again!\r\n\r\nSure @younesbelkada!",
"@younesbelkada, can we now merge this PR since the Auto-GPTQ PRs are merged now? https://github.com/PanQiWei/AutoGPTQ/pull/385\r\nhttps://github.com/huggingface/optimum/pull/1496"
] | 1,696 | 1,698 | 1,698 | CONTRIBUTOR | null | Right now, we can only load the GPTQ Quantized model on the CUDA device. The flag `load_gptq_on_cpu` adds the support to load the GPTQ models on the CPU. The larger variants of the model are hard to load/run/trace on the GPU and that's the rationale behind adding this flag.
Signed-Off By: Vivek Khandelwal <[email protected]> | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26719/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26719",
"html_url": "https://github.com/huggingface/transformers/pull/26719",
"diff_url": "https://github.com/huggingface/transformers/pull/26719.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26719.patch",
"merged_at": 1698759924000
} |
https://api.github.com/repos/huggingface/transformers/issues/26718 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26718/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26718/comments | https://api.github.com/repos/huggingface/transformers/issues/26718/events | https://github.com/huggingface/transformers/issues/26718 | 1,935,161,124 | I_kwDOCUB6oc5zWDck | 26,718 | Error while trying to run InferenceSession of onnxruntime. ValueError: Required inputs (['decoder_input_ids']) are missing from input feed (['input_ids', 'attention_mask']). | {
"login": "Burakabdi",
"id": 114612030,
"node_id": "U_kgDOBtTXPg",
"avatar_url": "https://avatars.githubusercontent.com/u/114612030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Burakabdi",
"html_url": "https://github.com/Burakabdi",
"followers_url": "https://api.github.com/users/Burakabdi/followers",
"following_url": "https://api.github.com/users/Burakabdi/following{/other_user}",
"gists_url": "https://api.github.com/users/Burakabdi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Burakabdi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Burakabdi/subscriptions",
"organizations_url": "https://api.github.com/users/Burakabdi/orgs",
"repos_url": "https://api.github.com/users/Burakabdi/repos",
"events_url": "https://api.github.com/users/Burakabdi/events{/privacy}",
"received_events_url": "https://api.github.com/users/Burakabdi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"WDYT @fxmarty ?",
"Thank you for the details @Burakabdi! The up to date documentation is here about ONNX export is here: https://huggingface.co/docs/transformers/v4.34.0/en/serialization\r\n\r\nA working code snippet, matching Transformers generation, could be:\r\n\r\n```python\r\nfrom transformers import AutoTokenizer\r\nfrom optimum.onnxruntime import ORTModelForSeq2SeqLM\r\n\r\nmodel_id = \"tsmatz/mt5_summarize_japanese\"\r\ntokenizer = AutoTokenizer.from_pretrained(model_id)\r\nmodel = ORTModelForSeq2SeqLM.from_pretrained(model_id, export=True)\r\n\r\ntext = \"サッカーのワールドカップカタール大会、世界ランキング24位でグループEに属する日本は、23日の1次リーグ初戦において、世界11位で過去4回の優勝を誇るドイツと対戦しました。試合は前半、ドイツの一方的なペースではじまりましたが、後半、日本の森保監督は攻撃的な選手を積極的に動員して流れを変えました。結局、日本は前半に1点を奪われましたが、途中出場の堂安律選手と浅野拓磨選手が後半にゴールを決め、2対1で逆転勝ちしました。ゲームの流れをつかんだ森保采配が功を奏しました。\"\r\n\r\ninputs = tokenizer(text, return_tensors=\"pt\")\r\noutputs = model.generate(**inputs, max_new_tokens=15)\r\nprint(tokenizer.batch_decode(outputs))\r\n```\r\n\r\nYou can find out more in Optimum documentation:\r\nhttps://huggingface.co/docs/optimum/main/en/exporters/onnx/overview\r\nhttps://huggingface.co/docs/optimum/main/en/onnxruntime/overview\r\n\r\nIf you'd like, you could also recode a `generate` method in pure numpy and/or with ORT C++ API.",
"Thank you so much @fxmarty and @LysandreJik . Problem solved :) "
] | 1,696 | 1,697 | 1,697 | NONE | null | I am a begginer. I received this error: `ValueError: Required inputs (['decoder_input_ids']) are missing from input feed (['input_ids', 'attention_mask'])` while trying to run inference.
**Model insights:**
* google/mt5-base
* seq2seq
* MT5ForConditionalGeneration
I exported my fine-tuned PyTorch model to ONNX by following [this guide](https://huggingface.co/docs/transformers/v4.23.1/en/serialization) with the following code: `python -m transformers.onnx --model=mt5-base-finetuned-info-extraction onnx/`
**After exporting, I have these files in the onnx folder:**
- config.json
- model.onnx
- special_tokens_map.json
- spiece.model
- tokenizer_config.json
- tokenizer.json
The fine-tuned PyTorch model works fine and generates the output as expected. However, after exporting to ONNX, when I run inference I receive the error I mentioned earlier.
I try to run inference with the following code:
```
from transformers import AutoTokenizer
from onnxruntime import InferenceSession
tokenizer = AutoTokenizer.from_pretrained("onnx")
session = InferenceSession("onnx/model.onnx", providers=['AzureExecutionProvider', 'CPUExecutionProvider'])
text = "This is an example Arabic text"
inputs = tokenizer(text, return_tensors="np")
outputs = session.run(output_names=["last_hidden_state"], input_feed=dict(inputs))
```
After receiving the error `ValueError: Required inputs (['decoder_input_ids']) are missing from input feed (['input_ids', 'attention_mask'])`, I tried to add `decoder_input_ids` to the input feed with this code:
```
from transformers import AutoTokenizer
from onnxruntime import InferenceSession
import numpy as np
tokenizer = AutoTokenizer.from_pretrained("onnx")
session = InferenceSession("onnx/model.onnx", providers=['AzureExecutionProvider', 'CPUExecutionProvider'])
text = "This is an example Arabic text"
inputs = tokenizer(text, return_tensors="np")
decoder_start_token = tokenizer.pad_token_id
decoder_input_ids = np.full((1, 1), decoder_start_token, dtype=np.int64)
inputs["input_ids"] = inputs["input_ids"].astype(np.int64)
inputs["attention_mask"] = inputs["attention_mask"].astype(np.int64)
input_feed = {
"input_ids": inputs["input_ids"],
"attention_mask": inputs["attention_mask"],
"decoder_input_ids": decoder_input_ids
}
outputs = session.run(output_names=["last_hidden_state"], input_feed=input_feed)
logits = outputs[0]
predicted_token_id = np.argmax(logits)
decoded_output = tokenizer.decode(predicted_token_id, skip_special_tokens=True)
print(decoded_output)
```
I received an output from the ONNX model this way; however, the output is not meaningful and not at all as expected.
So my questions are:
Is my case related to the exportation or to running inference?
How do I make the ONNX model generate proper outputs like the PyTorch model does?
Any help will be highly appreciated, thanks in advance. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26718/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26718/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26717 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26717/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26717/comments | https://api.github.com/repos/huggingface/transformers/issues/26717/events | https://github.com/huggingface/transformers/issues/26717 | 1,935,137,257 | I_kwDOCUB6oc5zV9np | 26,717 | Removing Parallelization Causing Inconsistent Shape of Model Parameters | {
"login": "kumulaor",
"id": 96123992,
"node_id": "U_kgDOBbq8WA",
"avatar_url": "https://avatars.githubusercontent.com/u/96123992?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kumulaor",
"html_url": "https://github.com/kumulaor",
"followers_url": "https://api.github.com/users/kumulaor/followers",
"following_url": "https://api.github.com/users/kumulaor/following{/other_user}",
"gists_url": "https://api.github.com/users/kumulaor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kumulaor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kumulaor/subscriptions",
"organizations_url": "https://api.github.com/users/kumulaor/orgs",
"repos_url": "https://api.github.com/users/kumulaor/repos",
"events_url": "https://api.github.com/users/kumulaor/events{/privacy}",
"received_events_url": "https://api.github.com/users/kumulaor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I also removed the use of jax.max.pmean in train_step",
"The complete error is as follows:\r\n```\r\nTraceback (most recent call last):\r\n File \"run_flax_glue.py\", line 660, in <module>\r\n main()\r\n File \"run_flax_glue.py\", line 600, in main\r\n state, train_metric, dropout_rng = train_step(state, batch, dropout_rng)\r\n File \"run_flax_glue.py\", line 559, in train_step\r\n loss, grad = grad_fn(state.params)\r\n File \"run_flax_glue.py\", line 554, in loss_fn\r\n logits = state.apply_fn(**batch, params=params, dropout_rng=dropout_rng, train=True)[0]\r\n File \"/public/home/ghfund3_a45/miniconda3/envs/qiuyang/lib/python3.8/site-packages/transformers/models/bert/modeling_flax_bert.py\", line 937, in __call__\r\n outputs = self.module.apply(\r\n File \"/public/home/ghfund3_a45/miniconda3/envs/qiuyang/lib/python3.8/site-packages/transformers/models/bert/modeling_flax_bert.py\", line 1329, in __call__\r\n outputs = self.bert(\r\n File \"/public/home/ghfund3_a45/miniconda3/envs/qiuyang/lib/python3.8/site-packages/transformers/models/bert/modeling_flax_bert.py\", line 992, in __call__\r\n hidden_states = self.embeddings(\r\n File \"/public/home/ghfund3_a45/miniconda3/envs/qiuyang/lib/python3.8/site-packages/transformers/models/bert/modeling_flax_bert.py\", line 207, in __call__\r\n inputs_embeds = self.word_embeddings(input_ids.astype(\"i4\"))\r\n File \"/public/home/ghfund3_a45/miniconda3/envs/qiuyang/lib/python3.8/site-packages/flax/linen/linear.py\", line 645, in setup\r\n self.embedding = self.param('embedding',\r\nflax.errors.ScopeParamShapeError: Inconsistent shapes between value and initializer for parameter \"embedding\" in \"/bert/embeddings/word_embeddings\": (1, 30522, 768), (30522, 768). (https://flax.readthedocs.io/en/latest/flax.errors.html#flax.errors.ScopeParamShapeError)\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,696 | 1,700 | 1,700 | NONE | null | ### System Info
I use the run_flax_glue.py file in transformers/examples/flax/text-classification to test and run the BERT model. Now I want to remove the data parallelism used in the file. My modification to the code is to remove the use of **jax.pmap** and, in the training loop, the use of **p_train_step** and **p_eval_step**.
Then, running my modified code, an error occurred: **flax.errors.ScopeParamShapeError: Inconsistent shapes between value and initializer for parameter "embedding" in "/bert/embeddings/word_embeddings": (1, 30522, 768), (30522, 768).**
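One possible cause (unverified): the extra leading `1` in `(1, 30522, 768)` looks like the per-device axis that `flax.jax_utils.replicate` adds when preparing the state for `jax.pmap`. A minimal sketch of the corresponding single-device change, assuming `state`, `train_step` and `eval_step` refer to the objects in the script and that the state is still being replicated somewhere:
```python
import jax
from flax import jax_utils

# Sketch only: drop the leading per-device axis if the state was replicated for pmap.
state = jax_utils.unreplicate(state)

# Single-device steps can then be jit-compiled instead of pmapped:
train_step = jax.jit(train_step)
eval_step = jax.jit(eval_step)
```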
Any help is appreciated. Thanks!
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. Remove the use of jax.pmap and replace p_train_step and p_eval_step in the loop with train_step and eval_step
2. Run the modified run_flax_glue.py file
3. See the error
### Expected behavior
I hope it can run normally after deleting the data parallelism. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26717/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26716 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26716/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26716/comments | https://api.github.com/repos/huggingface/transformers/issues/26716/events | https://github.com/huggingface/transformers/issues/26716 | 1,935,076,605 | I_kwDOCUB6oc5zVuz9 | 26,716 | llama-2-7b-chat-hf __call__() method throws memory error | {
"login": "kaoutaar",
"id": 51215027,
"node_id": "MDQ6VXNlcjUxMjE1MDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/51215027?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kaoutaar",
"html_url": "https://github.com/kaoutaar",
"followers_url": "https://api.github.com/users/kaoutaar/followers",
"following_url": "https://api.github.com/users/kaoutaar/following{/other_user}",
"gists_url": "https://api.github.com/users/kaoutaar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kaoutaar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kaoutaar/subscriptions",
"organizations_url": "https://api.github.com/users/kaoutaar/orgs",
"repos_url": "https://api.github.com/users/kaoutaar/repos",
"events_url": "https://api.github.com/users/kaoutaar/events{/privacy}",
"received_events_url": "https://api.github.com/users/kaoutaar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey could you share a full reproducer as well as the output of `transformers-cli env`",
"@ArthurZucker sure, here's the code used in colab:\r\n\r\n```\r\nmodel_id = \"meta-llama/Llama-2-7b-chat-hf\"\r\nllamamodel = AutoModelForCausalLM.from_pretrained(model_id,\r\n torch_dtype=torch.float16,\r\n load_in_4bit=True,\r\n low_cpu_mem_usage=True, token=token)\r\nllamatokenizer = AutoTokenizer.from_pretrained(model_id)\r\nllamatokenizer.pad_token = llamatokenizer.eos_token\r\n\r\nprompt = \"some long text\"\r\ns = llamatokenizer(prompt, return_tensors='pt')\r\n#s.input_ids.shape ==> torch.Size([1, 1120])\r\nllamamodel(**s)\r\n```\r\n\r\ntried to run `transformer-cli ` but it said \"command not found\" !\r\n",
"cc @SunMarc when he comes back for big model inference + quantization ",
"Hi @kaoutaar , `.generate()` uses `torch.no_grad()`. This might explain with you are getting OOM when you `call()`. ",
"> Hi @kaoutaar , `.generate()` uses `torch.no_grad()`. This might explain with you are getting OOM when you `call()`.\r\n\r\nIf we back-prop gradients via `torch.set_grad_enabled(True)`, how can we control the memory usage? ",
"You should use [gradient checkpointing ](https://huggingface.co/docs/transformers/v4.18.0/en/performance#gradient-checkpointing)",
"> gradient checkpointing\r\n\r\nDear Arthur,\r\n\r\nWell received with thanks!\r\n\r\nBest regards,\r\n\r\nShuyue\r\nDec. 4th, 2023"
] | 1,696 | 1,701 | 1,698 | NONE | null | ### System Info
i am using "meta-llama/Llama-2-7b-chat-hf" model loaded in 4bits for inference in both kaggle (GPU T4x2) and colab (GPU T4x1), the model works fine with generate method:
`llamatokenizer.batch_decode(llamamodel.generate(inputs=s.input_ids, max_new_tokens=60))
`
OUT:
`["<s> \n<s> [INST] <<SYS>>\nYou are a helpful,.....`
but it immediately crashes when I try to use __call__:
`llamamodel(**s)`
OUT:
```
OutOfMemoryError Traceback (most recent call last)
[<ipython-input-45-5cc8812cd90c>](https://localhost:8080/#) in <cell line: 1>()
----> 1 llamamodel(**s)
12 frames
[/usr/local/lib/python3.10/dist-packages/torch/nn/functional.py](https://localhost:8080/#) in softmax(input, dim, _stacklevel, dtype)
1843 ret = input.softmax(dim)
1844 else:
-> 1845 ret = input.softmax(dim, dtype=dtype)
1846 return ret
1847
OutOfMemoryError: CUDA out of memory. Tried to allocate 154.00 MiB (GPU 0; 14.75 GiB total capacity; 13.07 GiB already allocated; 54.81 MiB free; 13.65 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
Isn't the generate() method using __call__() under the hood? Why is it crashing then?
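For reference, a minimal sketch of running the same forward pass without building the autograd graph, which is what `generate()` effectively does via `torch.no_grad()` (variable names follow the snippet above):
```python
import torch

with torch.no_grad():           # no activations kept for backprop, as in generate()
    outputs = llamamodel(**s)   # same forward call, much lower peak memory

# If gradients are actually needed, gradient checkpointing trades compute for memory:
# llamamodel.gradient_checkpointing_enable()
```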
### Who can help?
@gante @younesbelkada @ArthurZucker
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
see above
### Expected behavior
see above | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26716/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26715 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26715/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26715/comments | https://api.github.com/repos/huggingface/transformers/issues/26715/events | https://github.com/huggingface/transformers/issues/26715 | 1,934,961,989 | I_kwDOCUB6oc5zVS1F | 26,715 | Support download of models from Torrent | {
"login": "filopedraz",
"id": 29598954,
"node_id": "MDQ6VXNlcjI5NTk4OTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/29598954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/filopedraz",
"html_url": "https://github.com/filopedraz",
"followers_url": "https://api.github.com/users/filopedraz/followers",
"following_url": "https://api.github.com/users/filopedraz/following{/other_user}",
"gists_url": "https://api.github.com/users/filopedraz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/filopedraz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/filopedraz/subscriptions",
"organizations_url": "https://api.github.com/users/filopedraz/orgs",
"repos_url": "https://api.github.com/users/filopedraz/repos",
"events_url": "https://api.github.com/users/filopedraz/events{/privacy}",
"received_events_url": "https://api.github.com/users/filopedraz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi! I'm new to open source, can you please help me understand the issue better? I understand I have to show a POC for this, but has this functionality already been implemented, or would I have to make it myself? \r\nThank You in Advance!",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hey! This is currently not planned 🤗 "
] | 1,696 | 1,700 | 1,700 | NONE | null | ### Feature request
### Description
Support models download from Torrent passing as a parameter in the `AutoTokenizer` and `AutoModel` class `magnet_url`.
### Example Usage
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("magnet:?xt=urn:btih:XXXXXX")
model = AutoModel.from_pretrained("magnet:?xt=urn:btih:XXXXXX")
inputs = tokenizer("Hello world!", return_tensors="pt")
outputs = model(**inputs)
```
### Motivation
Models could be censored and removed from HF. In order to keep them alive, Torrent could be a solution.
### Your contribution
I can do a POC of the implementation using a model mirrored from HF to Torrent. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26715/reactions",
"total_count": 12,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 2,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26715/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26714 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26714/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26714/comments | https://api.github.com/repos/huggingface/transformers/issues/26714/events | https://github.com/huggingface/transformers/issues/26714 | 1,934,927,411 | I_kwDOCUB6oc5zVKYz | 26,714 | Code Llama HF tokenizer length is 32004 whereas vocab_size is 32000 | {
"login": "dineshkh",
"id": 14121108,
"node_id": "MDQ6VXNlcjE0MTIxMTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/14121108?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dineshkh",
"html_url": "https://github.com/dineshkh",
"followers_url": "https://api.github.com/users/dineshkh/followers",
"following_url": "https://api.github.com/users/dineshkh/following{/other_user}",
"gists_url": "https://api.github.com/users/dineshkh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dineshkh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dineshkh/subscriptions",
"organizations_url": "https://api.github.com/users/dineshkh/orgs",
"repos_url": "https://api.github.com/users/dineshkh/repos",
"events_url": "https://api.github.com/users/dineshkh/events{/privacy}",
"received_events_url": "https://api.github.com/users/dineshkh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @ArthurZucker ",
"Hey, that's because the tokenizer was not updated. I opened [this](https://huggingface.co/codellama/CodeLlama-34b-hf/discussions/15/files) but will open anotherone to make sure they merge is and we have the latest format. \r\nThanks for reporting 🤗 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Duplicated issue: #27053 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,696 | 1,702 | 1,702 | NONE | null | ### System Info
- `transformers` version: 4.34.0
- Platform: Linux-4.18.0-477.15.1.el8_8.x86_64-x86_64-with-glibc2.28
- Python version: 3.11.2
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu118 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-34b-hf")
model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-34b-hf")
embedding_size = model.get_input_embeddings().weight.shape[0]
print("Length of tokenizer: {} : ".format(len(tokenizer)))
print("vocab_size: {} : ".format(model.config.vocab_size))
print("embedding_size: {} : ".format(embedding_size))
```
```
Length of tokenizer: 32004
vocab_size: 32000
embedding_size: 32000
```
### Expected behavior
```
Length of tokenizer: 32004
vocab_size: 32004
embedding_size: 32004
```
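A possible workaround sketch while the checkpoint's tokenizer and model config disagree (not necessarily the intended fix; `model` and `tokenizer` refer to the reproduction above):
```python
# Sketch: grow the embedding matrix to cover the tokenizer's 4 extra special tokens.
model.resize_token_embeddings(len(tokenizer))
print(model.get_input_embeddings().weight.shape[0])  # 32004
```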
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26714/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26713 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26713/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26713/comments | https://api.github.com/repos/huggingface/transformers/issues/26713/events | https://github.com/huggingface/transformers/pull/26713 | 1,934,852,600 | PR_kwDOCUB6oc5cW1dC | 26,713 | `Copied from` for test files | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"well, I need to check why \r\n\r\n```\r\nci/circleci: tests_repo_utils\r\n```\r\nfails.",
"_The documentation is not available anymore as the PR was closed or merged._",
"Will have to perform some fixes about copied statements before merge.",
"If there is a single place, say `__init__`, being a non-copy, then so far we can't put `# Copied from` at the class level. We can rework the script a bit to have `# Ignore copied` in a method, but would be better to do this in a follow up PR.",
"Ok!"
] | 1,696 | 1,697 | 1,697 | COLLABORATOR | null | # What does this PR do?
`Copied from` for test files.
Running `make fix-copies` shows:
```bash
python utils/check_copies.py --fix_and_overwrite
Detected changes, rewriting tests/models\longformer\test_tokenization_longformer.py.
```
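For context, a hedged illustration of what a `# Copied from` statement looks like once it is allowed in test files — the module and class names below are assumptions for illustration, not the ones actually touched by this PR:
```python
# Illustrative sketch only; names are assumed, not taken from this diff.
import unittest


# Copied from tests.models.roberta.test_tokenization_roberta.RobertaTokenizationTest with Roberta->Longformer
class LongformerTokenizationTest(unittest.TestCase):
    # `make fix-copies` keeps this class in sync with the referenced one,
    # applying the Roberta->Longformer string replacement.
    def test_placeholder(self):
        self.assertTrue(True)
```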
I will need to update `test_tokenization_longformer.py` before merge. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26713/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26713",
"html_url": "https://github.com/huggingface/transformers/pull/26713",
"diff_url": "https://github.com/huggingface/transformers/pull/26713.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26713.patch",
"merged_at": 1697026329000
} |
https://api.github.com/repos/huggingface/transformers/issues/26712 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26712/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26712/comments | https://api.github.com/repos/huggingface/transformers/issues/26712/events | https://github.com/huggingface/transformers/pull/26712 | 1,934,799,867 | PR_kwDOCUB6oc5cWptL | 26,712 | Batch inference with text for BLIP 2 processing | {
"login": "Keracles",
"id": 103105238,
"node_id": "U_kgDOBiVC1g",
"avatar_url": "https://avatars.githubusercontent.com/u/103105238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Keracles",
"html_url": "https://github.com/Keracles",
"followers_url": "https://api.github.com/users/Keracles/followers",
"following_url": "https://api.github.com/users/Keracles/following{/other_user}",
"gists_url": "https://api.github.com/users/Keracles/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Keracles/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Keracles/subscriptions",
"organizations_url": "https://api.github.com/users/Keracles/orgs",
"repos_url": "https://api.github.com/users/Keracles/repos",
"events_url": "https://api.github.com/users/Keracles/events{/privacy}",
"received_events_url": "https://api.github.com/users/Keracles/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,696 | 1,696 | 1,696 | NONE | null | # What does this PR do?
This PR enables batch inference for BLIP-2 processing with a single text prompt instead of requiring a list of texts.
Issue #26633
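A hedged usage sketch of the behavior this PR targets (the checkpoint and prompt are illustrative): one text prompt reused across a batch of images instead of a list with one entry per image.
```python
import requests
from PIL import Image
from transformers import Blip2Processor

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Before: text had to be a list with one prompt per image.
# With this change: a single string is broadcast to every image in the batch.
inputs = processor(
    images=[image, image],
    text="Question: what is in the image? Answer:",
    return_tensors="pt",
)
print({k: v.shape for k, v in inputs.items()})
```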
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26712/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26712/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26712",
"html_url": "https://github.com/huggingface/transformers/pull/26712",
"diff_url": "https://github.com/huggingface/transformers/pull/26712.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26712.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26711 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26711/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26711/comments | https://api.github.com/repos/huggingface/transformers/issues/26711/events | https://github.com/huggingface/transformers/pull/26711 | 1,934,775,692 | PR_kwDOCUB6oc5cWkJI | 26,711 | Fix stale bot for locked issues | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Just the locked issues, I believe there's a single one here but unsure"
] | 1,696 | 1,697 | 1,697 | MEMBER | null | The stalebot has crashed once again due to the following error:
```
Traceback (most recent call last):
File "scripts/stale.py", line 67, in <module>
main()
File "scripts/stale.py", line 57, in main
issue.create_comment(
File "/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/github/Issue.py", line 290, in create_comment
headers, data = self._requester.requestJsonAndCheck("POST", f"{self.url}/comments", input=post_parameters)
File "/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/github/Requester.py", line 494, in requestJsonAndCheck
return self.__check(*self.requestJson(verb, url, parameters, headers, input, self.__customConnection(url)))
File "/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/github/Requester.py", line 629, in requestJson
return self.__requestEncode(cnx, verb, url, parameters, headers, input, encode)
File "/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/github/Requester.py", line 726, in __requestEncode
status, responseHeaders, output = self.__requestRaw(cnx, verb, url, requestHeaders, encoded_input)
File "/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/github/Requester.py", line 760, in __requestRaw
response = cnx.getresponse()
File "/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/github/Requester.py", line 174, in getresponse
r = verb(
File "/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/requests/sessions.py", line 637, in post
return self.request("POST", url, data=data, json=json, **kwargs)
File "/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/urllib3/connectionpool.py", line 931, in urlopen
retries = retries.increment(method, url, response=response, _pool=self)
File "/opt/hostedtoolcache/Python/3.8.18/x64/lib/python3.8/site-packages/github/GithubRetry.py", line 179, in increment
raise Requester.createException(response.status, response.headers, content) # type: ignore
github.GithubException.GithubException: 403 {"message": "Unable to create comment because issue is locked.", "documentation_url": "https://docs.github.com/articles/locking-conversations/"}
```
I believe it's the first time it's encountering a locked issue, hence the failure. I tested locally that it ran fine on this specific issue.
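A hedged sketch of the kind of guard that avoids the crash (the helper below is illustrative and not the actual `scripts/stale.py` diff): skip locked issues up front and tolerate a 403 if one slips through.
```python
from github import GithubException


def comment_if_possible(issue, message):
    if issue.locked:  # PyGithub exposes the lock state on the Issue object
        return
    try:
        issue.create_comment(message)
    except GithubException as err:
        if err.status != 403:  # a locked/forbidden issue should not kill the whole run
            raise
```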
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26711/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26711",
"html_url": "https://github.com/huggingface/transformers/pull/26711",
"diff_url": "https://github.com/huggingface/transformers/pull/26711.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26711.patch",
"merged_at": 1697033335000
} |
https://api.github.com/repos/huggingface/transformers/issues/26710 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26710/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26710/comments | https://api.github.com/repos/huggingface/transformers/issues/26710/events | https://github.com/huggingface/transformers/issues/26710 | 1,934,626,048 | I_kwDOCUB6oc5zUA0A | 26,710 | Error when "device_map='auto'" meets "load_state_dict" | {
"login": "shutttttdown",
"id": 117346792,
"node_id": "U_kgDOBv6R6A",
"avatar_url": "https://avatars.githubusercontent.com/u/117346792?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shutttttdown",
"html_url": "https://github.com/shutttttdown",
"followers_url": "https://api.github.com/users/shutttttdown/followers",
"following_url": "https://api.github.com/users/shutttttdown/following{/other_user}",
"gists_url": "https://api.github.com/users/shutttttdown/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shutttttdown/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shutttttdown/subscriptions",
"organizations_url": "https://api.github.com/users/shutttttdown/orgs",
"repos_url": "https://api.github.com/users/shutttttdown/repos",
"events_url": "https://api.github.com/users/shutttttdown/events{/privacy}",
"received_events_url": "https://api.github.com/users/shutttttdown/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @SunMarc when you're back from leave!",
"Sorry for the late reply @shutttttdown, this has been fixed in in the version v0.24 (october 24) of accelerate through this [PR](https://github.com/huggingface/accelerate/pull/1971). LMK if it works on your end ! Thanks also for the great reproducer. The issue was that the hooks were not properly copied. Hence, they were still referencing the old forward (model_2) instead of the forward of `model2_copied`. ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,696 | 1,703 | 1,703 | NONE | null | ### System Info
- `transformers` version: 4.32.1
- Platform: Linux-5.4.0-164-generic-x86_64-with-glibc2.31
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): 2.13.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help?
@stevhliu @MKhalusova @sgugger
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
Hi everyone,
I conducted an experiment on a machine with two A100 GPUs and encountered an issue. After simplifying, the scenario is as follows:
I set device_map='auto' and loaded two models, model1 and model2, both of which have the same structure. Then, I created a copy of model2, referred to as model2_copied, and attempted to load all its parameters from model1 using load_state_dict.
After loading the parameters, I compared model2_copied with model1 and found that their parameters were indeed identical. However, when I actually ran inference, the results of model2_copied remained the same as model2's, rather than matching model1's as I expected.
My code is as below:
```python
from transformers import AutoModel, AutoTokenizer, LlamaForCausalLM, AutoModelForCausalLM, pipeline
import torch
import copy
model1_path = '...'
model1 = AutoModelForCausalLM.from_pretrained(model1_path, trust_remote_code=True, device_map='auto')
model2_path = '...'
model2 = AutoModelForCausalLM.from_pretrained(model2_path, trust_remote_code=True, device_map='auto')
model2_copied = copy.deepcopy(model2)
tokenizer = AutoTokenizer.from_pretrained(model2_path, use_fast=False, trust_remote_code=True)
# copy parameters from model1 to model2_copied
model2_copied.load_state_dict(model1.state_dict())
model2_copied.lm_head.load_state_dict(model1.lm_head.state_dict())
def compare_model_param(model1, model2):
if str(model1) == str(model2):
print("The model structures are identical.")
else:
print("The model structures are not identical.")
state_dict1 = model1.state_dict()
state_dict2 = model2.state_dict()
param_names1 = set(state_dict1.keys())
param_names2 = set(state_dict2.keys())
Flag = False
if param_names1 == param_names2:
for param_name in param_names1:
if not torch.equal(state_dict1[param_name], state_dict2[param_name]):
Flag = True
print(f"parameter {param_name} is not identical.")
if Flag == False:
print(f"All parameters are identical.")
compare_model_param(model1, model2_copied)
# The model structures are identical.
# All parameters are identical.
inp = tokenizer('Hello', return_tensors='pt', return_token_type_ids=False)
print(model1(**inp).logits)
print('*'*20)
print(model2(**inp).logits)
print('*'*20)
print(model2_copied(**inp).logits)
'''
tensor([[[11.2086, 11.2117, 25.7801, ..., 12.2967, 11.0015, 11.6876]]],
grad_fn=<ToCopyBackward0>)
********************
tensor([[[35.8617, 35.9438, 53.7408, ..., 34.0522, 34.9409, 33.9223]]],
grad_fn=<ToCopyBackward0>)
********************
tensor([[[35.8617, 35.9438, 53.7408, ..., 34.0522, 34.9409, 33.9223]]],
grad_fn=<ToCopyBackward0>)
'''
# Find that the output of model2_copied is the same as model2 rather than model1
```
However, when I loaded them without device_map='auto' and instead directly placed them on a GPU using .cuda(), the aforementioned issue did not exist, and the inference results of model2_copied matched those of model1.
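For reference, a hedged workaround sketch that continues the snippet above (per the maintainer comment, accelerate >= 0.24 fixes the deepcopy itself): instead of deepcopying a dispatched model, build `model2_copied` as a fresh `from_pretrained` instance so its dispatch hooks point at its own `forward`, then copy `model1`'s parameters into it.
```python
# Hedged workaround sketch, reusing model1 and model2_path from the snippet above:
# a fresh from_pretrained instance gets its own (correct) dispatch hooks, and
# load_state_dict then copies model1's weights into it.
from transformers import AutoModelForCausalLM

model2_copied = AutoModelForCausalLM.from_pretrained(
    model2_path, trust_remote_code=True, device_map="auto"
)
model2_copied.load_state_dict(model1.state_dict())
```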
### Expected behavior
See above,
I'm not sure if I made a mistake somewhere or if there was a misunderstanding. Thanks all. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26710/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26709 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26709/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26709/comments | https://api.github.com/repos/huggingface/transformers/issues/26709/events | https://github.com/huggingface/transformers/pull/26709 | 1,934,538,891 | PR_kwDOCUB6oc5cVvIm | 26,709 | [docstring] Fix docstring for `CodeLlamaTokenizer` | {
"login": "Bojun-Feng",
"id": 102875484,
"node_id": "U_kgDOBiHBXA",
"avatar_url": "https://avatars.githubusercontent.com/u/102875484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bojun-Feng",
"html_url": "https://github.com/Bojun-Feng",
"followers_url": "https://api.github.com/users/Bojun-Feng/followers",
"following_url": "https://api.github.com/users/Bojun-Feng/following{/other_user}",
"gists_url": "https://api.github.com/users/Bojun-Feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bojun-Feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bojun-Feng/subscriptions",
"organizations_url": "https://api.github.com/users/Bojun-Feng/orgs",
"repos_url": "https://api.github.com/users/Bojun-Feng/repos",
"events_url": "https://api.github.com/users/Bojun-Feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bojun-Feng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh \r\n\r\nPR for `CodeLlamaTokenizer`, as mentioned in #26666 ",
"Could you share how you fix the env. issue to get it work? Thanks a lot!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26709). All of your documentation changes will be reflected on that endpoint.",
"> Could you share how you fix the env. issue to get it work? Thanks a lot!\r\n\r\nRunning `pip install -e \".[dev-torch]\"` fixed the issue for me."
] | 1,696 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Fixes #26638
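For context, a hedged sketch of the docstring-sprint workflow described in #26638 (the commands follow the sprint instructions, not this specific diff): remove `CodeLlamaTokenizer` from `OBJECTS_TO_IGNORE` in `utils/check_docstrings.py`, then:
```bash
# regenerate the docstring templates for the un-ignored object
python utils/check_docstrings.py --fix_and_overwrite
# fill in the missing argument descriptions by hand, then re-check and clean up
python utils/check_docstrings.py
make fixup
```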
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26709/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26709/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26709",
"html_url": "https://github.com/huggingface/transformers/pull/26709",
"diff_url": "https://github.com/huggingface/transformers/pull/26709.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26709.patch",
"merged_at": 1697040083000
} |
https://api.github.com/repos/huggingface/transformers/issues/26708 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26708/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26708/comments | https://api.github.com/repos/huggingface/transformers/issues/26708/events | https://github.com/huggingface/transformers/pull/26708 | 1,934,491,144 | PR_kwDOCUB6oc5cVkYN | 26,708 | we need to register test backend first before using test device of third-party accelerators | {
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"https://github.com/huggingface/transformers/pull/25870 contains the modifications of this PR, closed"
] | 1,696 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
Fixes https://github.com/huggingface/transformers/pull/25870#issuecomment-1754523487
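A hedged sketch of the ordering issue in the title (the module and device names are illustrative, and this only runs where such an accelerator stack is installed): a third-party backend has to be imported/registered before its device string can be used.
```python
import importlib

import torch

backend_module = "torch_npu"              # assumed out-of-tree accelerator backend
importlib.import_module(backend_module)   # registering the backend must come first ...
device = torch.device("npu:0")            # ... only then is the custom device usable
x = torch.ones(2, 2).to(device)
print(x.device)
```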
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc @ydshieh
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26708/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26708",
"html_url": "https://github.com/huggingface/transformers/pull/26708",
"diff_url": "https://github.com/huggingface/transformers/pull/26708.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26708.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26707 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26707/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26707/comments | https://api.github.com/repos/huggingface/transformers/issues/26707/events | https://github.com/huggingface/transformers/pull/26707 | 1,934,436,961 | PR_kwDOCUB6oc5cVYPN | 26,707 | [DOCSTRING]: `SamConfig`, `SamPromptEncoderConfig`, | {
"login": "rajveer43",
"id": 64583161,
"node_id": "MDQ6VXNlcjY0NTgzMTYx",
"avatar_url": "https://avatars.githubusercontent.com/u/64583161?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rajveer43",
"html_url": "https://github.com/rajveer43",
"followers_url": "https://api.github.com/users/rajveer43/followers",
"following_url": "https://api.github.com/users/rajveer43/following{/other_user}",
"gists_url": "https://api.github.com/users/rajveer43/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rajveer43/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rajveer43/subscriptions",
"organizations_url": "https://api.github.com/users/rajveer43/orgs",
"repos_url": "https://api.github.com/users/rajveer43/repos",
"events_url": "https://api.github.com/users/rajveer43/events{/privacy}",
"received_events_url": "https://api.github.com/users/rajveer43/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh review it",
"Please run `make fixup`, check the changes, and push it. The CI is not green :-) so far."
] | 1,696 | 1,699 | 1,699 | CONTRIBUTOR | null | # What does this PR do?
Addresses part of #26638
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26707/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26707/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26707",
"html_url": "https://github.com/huggingface/transformers/pull/26707",
"diff_url": "https://github.com/huggingface/transformers/pull/26707.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26707.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26706 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26706/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26706/comments | https://api.github.com/repos/huggingface/transformers/issues/26706/events | https://github.com/huggingface/transformers/issues/26706 | 1,934,118,677 | I_kwDOCUB6oc5zSE8V | 26,706 | Add an option to decide whether to store the checkpoint and rng_state. | {
"login": "timturing",
"id": 86722018,
"node_id": "MDQ6VXNlcjg2NzIyMDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/86722018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timturing",
"html_url": "https://github.com/timturing",
"followers_url": "https://api.github.com/users/timturing/followers",
"following_url": "https://api.github.com/users/timturing/following{/other_user}",
"gists_url": "https://api.github.com/users/timturing/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timturing/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timturing/subscriptions",
"organizations_url": "https://api.github.com/users/timturing/orgs",
"repos_url": "https://api.github.com/users/timturing/repos",
"events_url": "https://api.github.com/users/timturing/events{/privacy}",
"received_events_url": "https://api.github.com/users/timturing/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | open | false | null | [] | [
"The issue I reported is still impacting my work as our group is building a pretty big project based on this, and I believe it's an important one to address. I would be grateful if you could help me.\r\n",
"cc @pacman100 and @muellerzr if you think this is something we ought to have! ",
"@pacman100 @muellerzr I also met the save situtation. Could you provide an option to save disk memory?",
"Hello @timturing, checkpoints during training are meant for resuming it and therefore save the model, optimizer and scheduler and rng states. What you want is to just save the model without considering the ability to resume training. Is that understanding correct?",
"@pacman100 Yes, exactly.",
"Just like the `save_strategy` in the Trainer (https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.save_strategy).\r\nSince SFT is very mature, we do not need to save the intermediate results for resuming.",
"@pacman100 @muellerzr Hi, could you improve this? This is very useful for me."
] | 1,696 | 1,699 | null | NONE | null | **Motivation:**
Currently, when using the Transformers library together with DeepSpeed to train large language models (LLMs), checkpoint state (e.g. `bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt`) is automatically saved along with the `rng_state`, which can lead to significant disk space usage. When multiple GPUs are used for training, this quickly becomes a storage bottleneck, especially on storage shared by a team. Sometimes we just want to keep the weight files (e.g. `pytorch_model-00001-of-00002.bin`), since they are enough to load the model again.
**Feature Request:**
I propose adding a configurable option to decide whether to store the checkpoint and `rng_state` during training. This will give users the flexibility to choose when to save checkpoints and reduce the disk space required.
**Proposed Solution:**
1. Add a new parameter, such as `save_checkpoint_enabled`, to the DeepSpeed configuration file. Users can set this parameter to `True` or `False` to control whether checkpoints and `rng_state` should be saved during training.
2. Modify the `trainer.py` script in the Transformers library to include a condition for `self.save_checkpoint_enabled` in the `_save_checkpoint` function. Here's a code snippet illustrating the change:
```python
if self.is_deepspeed_enabled and self.save_checkpoint_enabled:
# Save the checkpoint
```
This change will allow users to save disk space by not storing checkpoints when not needed, and it can help alleviate the storage challenges associated with large-scale language model training.
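As an interim approach (not a library feature), a hedged sketch of a `TrainerCallback` that prunes optimizer/RNG state after each save so only the model weights remain; the file-name patterns are assumptions about the checkpoint layout and may need adjusting, in particular for DeepSpeed's `global_step*/` folders:
```python
import glob
import os
import shutil

from transformers import TrainerCallback


class KeepWeightsOnlyCallback(TrainerCallback):
    def on_save(self, args, state, control, **kwargs):
        ckpt_dir = os.path.join(args.output_dir, f"checkpoint-{state.global_step}")
        for pattern in ("optimizer.pt", "scheduler.pt", "rng_state*.pth"):
            for path in glob.glob(os.path.join(ckpt_dir, pattern)):
                os.remove(path)
        for ds_dir in glob.glob(os.path.join(ckpt_dir, "global_step*")):
            shutil.rmtree(ds_dir, ignore_errors=True)  # assumed DeepSpeed optimizer shards
        return control
```
Registering it via `trainer.add_callback(KeepWeightsOnlyCallback())` keeps disk usage bounded without touching Trainer internals, at the cost of losing the ability to resume training from these checkpoints.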
I have already submitted this issue to the DeepSpeed library (https://github.com/microsoft/DeepSpeed/issues/4403#issue-1913025248), as this feature may require collaboration between both libraries. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26706/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26705 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26705/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26705/comments | https://api.github.com/repos/huggingface/transformers/issues/26705/events | https://github.com/huggingface/transformers/pull/26705 | 1,934,057,768 | PR_kwDOCUB6oc5cUDtc | 26,705 | Fix Typo: table in deepspeed.md | {
"login": "Pairshoe",
"id": 61651272,
"node_id": "MDQ6VXNlcjYxNjUxMjcy",
"avatar_url": "https://avatars.githubusercontent.com/u/61651272?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Pairshoe",
"html_url": "https://github.com/Pairshoe",
"followers_url": "https://api.github.com/users/Pairshoe/followers",
"following_url": "https://api.github.com/users/Pairshoe/following{/other_user}",
"gists_url": "https://api.github.com/users/Pairshoe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Pairshoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Pairshoe/subscriptions",
"organizations_url": "https://api.github.com/users/Pairshoe/orgs",
"repos_url": "https://api.github.com/users/Pairshoe/repos",
"events_url": "https://api.github.com/users/Pairshoe/events{/privacy}",
"received_events_url": "https://api.github.com/users/Pairshoe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26705). All of your documentation changes will be reflected on that endpoint."
] | 1,696 | 1,696 | 1,696 | CONTRIBUTOR | null | # What does this PR do?
The table in this doc has a syntax error and therefore does not render.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26705/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26705",
"html_url": "https://github.com/huggingface/transformers/pull/26705",
"diff_url": "https://github.com/huggingface/transformers/pull/26705.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26705.patch",
"merged_at": 1696931410000
} |
https://api.github.com/repos/huggingface/transformers/issues/26704 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26704/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26704/comments | https://api.github.com/repos/huggingface/transformers/issues/26704/events | https://github.com/huggingface/transformers/pull/26704 | 1,933,746,685 | PR_kwDOCUB6oc5cS9US | 26,704 | Add-support for commit description | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"add tests"
] | 1,696 | 1,698 | 1,698 | COLLABORATOR | null | # What does this PR do?
Let's make our lives easier.
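A hedged sketch of the intended usage (the `commit_description` parameter name follows this PR; the repo id is illustrative):
```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
model.push_to_hub(
    "my-user/my-model",  # assumed target repo
    commit_message="Upload fine-tuned weights",
    commit_description="Longer free-form notes about the commit, shown under the title on the Hub.",
)
```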
The branch history is messed up; will fix. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26704/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26704",
"html_url": "https://github.com/huggingface/transformers/pull/26704",
"diff_url": "https://github.com/huggingface/transformers/pull/26704.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26704.patch",
"merged_at": 1698316629000
} |
https://api.github.com/repos/huggingface/transformers/issues/26703 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26703/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26703/comments | https://api.github.com/repos/huggingface/transformers/issues/26703/events | https://github.com/huggingface/transformers/pull/26703 | 1,933,691,815 | PR_kwDOCUB6oc5cSxEi | 26,703 | [JAX] Replace uses of `jnp.array` in types with `jnp.ndarray`. | {
"login": "hvaara",
"id": 1535968,
"node_id": "MDQ6VXNlcjE1MzU5Njg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1535968?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hvaara",
"html_url": "https://github.com/hvaara",
"followers_url": "https://api.github.com/users/hvaara/followers",
"following_url": "https://api.github.com/users/hvaara/following{/other_user}",
"gists_url": "https://api.github.com/users/hvaara/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hvaara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hvaara/subscriptions",
"organizations_url": "https://api.github.com/users/hvaara/orgs",
"repos_url": "https://api.github.com/users/hvaara/repos",
"events_url": "https://api.github.com/users/hvaara/events{/privacy}",
"received_events_url": "https://api.github.com/users/hvaara/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26703). All of your documentation changes will be reflected on that endpoint."
] | 1,696 | 1,696 | 1,696 | CONTRIBUTOR | null | `jnp.array` is a function, not a type:
https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.array.html so it never makes sense to use `jnp.array` in a type annotation. Presumably the intent was to write `jnp.ndarray` aka `jax.Array`.
For a similar PR in `diffusers`, please see https://github.com/huggingface/diffusers/pull/4719.
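A minimal illustration of the annotation fix (the function names are illustrative):
```python
import jax.numpy as jnp


def scale_before(x: jnp.array, factor: float) -> jnp.array:  # misleading: annotates with a function
    return x * factor


def scale_after(x: jnp.ndarray, factor: float) -> jnp.ndarray:  # jnp.ndarray (aka jax.Array) is the type
    return x * factor
```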
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
cc @pcuenca @patrickvonplaten
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26703/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26703",
"html_url": "https://github.com/huggingface/transformers/pull/26703",
"diff_url": "https://github.com/huggingface/transformers/pull/26703.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26703.patch",
"merged_at": 1696966517000
} |
https://api.github.com/repos/huggingface/transformers/issues/26702 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26702/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26702/comments | https://api.github.com/repos/huggingface/transformers/issues/26702/events | https://github.com/huggingface/transformers/issues/26702 | 1,933,661,495 | I_kwDOCUB6oc5zQVU3 | 26,702 | Getting GPU crash when running compute_metrics within SFTTrainer | {
"login": "matthewchung74",
"id": 1685700,
"node_id": "MDQ6VXNlcjE2ODU3MDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1685700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matthewchung74",
"html_url": "https://github.com/matthewchung74",
"followers_url": "https://api.github.com/users/matthewchung74/followers",
"following_url": "https://api.github.com/users/matthewchung74/following{/other_user}",
"gists_url": "https://api.github.com/users/matthewchung74/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matthewchung74/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matthewchung74/subscriptions",
"organizations_url": "https://api.github.com/users/matthewchung74/orgs",
"repos_url": "https://api.github.com/users/matthewchung74/repos",
"events_url": "https://api.github.com/users/matthewchung74/events{/privacy}",
"received_events_url": "https://api.github.com/users/matthewchung74/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] | [
"cc @pacman100 @muellerzr",
"thank you @muellerzr ",
"I believe https://github.com/huggingface/transformers/pull/27458 will help with this after a bit more fixing in the PR",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,696 | 1,707 | 1,707 | NONE | null | ### System Info
This is in colab https://colab.research.google.com/drive/1qTIxG_9R8xIYryY5mEPXIydZB5QeSnKp#scrollTo=5VM67iQMymv5
I have an Mistral Fine tune in Colab. Before training, I try running
```
trainer.evaluate()
```
and get the following error message
```
[/usr/local/lib/python3.10/dist-packages/transformers/trainer_pt_utils.py](https://localhost:8080/#) in torch_pad_and_concatenate(tensor1, tensor2, padding_index)
86
87 # Now let's fill the result tensor
---> 88 result = tensor1.new_full(new_shape, padding_index)
89 result[: tensor1.shape[0], : tensor1.shape[1]] = tensor1
90 result[tensor1.shape[0] :, : tensor2.shape[1]] = tensor2
OutOfMemoryError: CUDA out of memory. Tried to allocate 12.94 GiB (GPU 0; 39.56 GiB total capacity; 18.33 GiB already allocated; 12.29 GiB free; 25.71 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
However, when `compute_metrics` is commented out, I can evaluate/train the model.
Any help is appreciated. Thanks!
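For completeness, here is the workaround I plan to try next, based on the `Trainer` docs (a rough, untested sketch; `model` and `eval_dataset` stand for the objects already defined in the notebook, and collapsing the logits to token ids in `preprocess_logits_for_metrics` is my assumption about what keeps the vocab-sized tensors off the GPU):
```python
from transformers import Trainer, TrainingArguments

def preprocess_logits_for_metrics(logits, labels):
    # collapse the (batch, seq_len, vocab_size) logits to token ids right away,
    # so only small tensors are accumulated/padded during evaluation
    return logits.argmax(dim=-1)

def compute_metrics(eval_pred):
    preds, labels = eval_pred
    # shift so that tokens < n predict token n (causal LM convention)
    preds, labels = preds[:, :-1], labels[:, 1:]
    mask = labels != -100
    return {"token_accuracy": float((preds[mask] == labels[mask]).mean())}

args = TrainingArguments(
    output_dir="out",
    per_device_eval_batch_size=1,
    eval_accumulation_steps=4,  # offload accumulated eval tensors to CPU every few steps
)

trainer = Trainer(
    model=model,                # from the notebook
    args=args,
    eval_dataset=eval_dataset,  # from the notebook
    compute_metrics=compute_metrics,
    preprocess_logits_for_metrics=preprocess_logits_for_metrics,
)
trainer.evaluate()
```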
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
1. run the colab https://colab.research.google.com/drive/1qTIxG_9R8xIYryY5mEPXIydZB5QeSnKp#scrollTo=5VM67iQMymv5
2. see the error
3. comment out `compute_metrics`
4. run again and do not see error
### Expected behavior
I don't see why `compute_metrics` should require any additional resources, so I am unsure why it uses more GPU memory, and I would expect it not to crash. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26702/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26701 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26701/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26701/comments | https://api.github.com/repos/huggingface/transformers/issues/26701/events | https://github.com/huggingface/transformers/pull/26701 | 1,933,535,018 | PR_kwDOCUB6oc5cSOb7 | 26,701 | [Assistant Generation] Improve Encoder Decoder | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The failing Hub [test](https://app.circleci.com/pipelines/github/huggingface/transformers/75063/workflows/31a94fbe-b7f7-456a-bff9-5f149743c3fd/jobs/951835) seems to be flaky.\r\n\r\nThis PR is ready for a final review."
] | 1,696 | 1,697 | 1,697 | MEMBER | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR speeds up assistant generation / speculative decoding for encoder-decoder models such as Distil-Whisper by ~20-30%.
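For context, the user-facing pattern this targets would look roughly as follows. Treat it as a sketch: the `assistant_encoder_outputs` kwarg is the one proposed in this PR, the assistant checkpoint name is a placeholder, and the dummy features just stand in for real log-mel inputs.
```python
import torch
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

device = "cuda"
processor = AutoProcessor.from_pretrained("openai/whisper-large-v2")
model = AutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-large-v2", torch_dtype=torch.float16).to(device)
# placeholder id for a distilled assistant that shares the teacher's encoder
assistant = AutoModelForSpeechSeq2Seq.from_pretrained("distil-whisper/distil-large-v2", torch_dtype=torch.float16).to(device)

# dummy log-mel features standing in for processor(audio, ...).input_features
input_features = torch.randn(1, 80, 3000, dtype=torch.float16, device=device)

# encode once and reuse the result for both the main model and the assistant
encoder_outputs = model.get_encoder()(input_features)
generated = model.generate(
    encoder_outputs=encoder_outputs,
    assistant_model=assistant,
    assistant_encoder_outputs=encoder_outputs,  # proposed in this PR
)
print(processor.batch_decode(generated, skip_special_tokens=True))
```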
Improvements:
- If the assistant and the main model share the same encoder, let's allow the user to pass `assistant_encoder_outputs` so that the inputs are not encoded twice (gives a ~20% speed-up)
- In the small loop I don't think we have to allocate tensors for the attention mask all the time; this is done automatically by the model if necessary (gives a ~3.4% speed-up)
- The heuristic to increase / decrease the number of "look-ahead" tokens doesn't work well for Whisper. Can we maybe allow the user to disable it, perhaps via a config attribute? | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26701/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26701/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26701",
"html_url": "https://github.com/huggingface/transformers/pull/26701",
"diff_url": "https://github.com/huggingface/transformers/pull/26701.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26701.patch",
"merged_at": 1697032341000
} |
https://api.github.com/repos/huggingface/transformers/issues/26700 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26700/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26700/comments | https://api.github.com/repos/huggingface/transformers/issues/26700/events | https://github.com/huggingface/transformers/issues/26700 | 1,933,499,257 | I_kwDOCUB6oc5zPtt5 | 26,700 | NotImplementedError: Cannot copy out of meta tensor; no data! when using device = "auto" in pipeline() | {
"login": "yongjer",
"id": 54315206,
"node_id": "MDQ6VXNlcjU0MzE1MjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/54315206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yongjer",
"html_url": "https://github.com/yongjer",
"followers_url": "https://api.github.com/users/yongjer/followers",
"following_url": "https://api.github.com/users/yongjer/following{/other_user}",
"gists_url": "https://api.github.com/users/yongjer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yongjer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yongjer/subscriptions",
"organizations_url": "https://api.github.com/users/yongjer/orgs",
"repos_url": "https://api.github.com/users/yongjer/repos",
"events_url": "https://api.github.com/users/yongjer/events{/privacy}",
"received_events_url": "https://api.github.com/users/yongjer/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @SunMarc ",
"Hello @yongjer @LysandreJik @SunMarc \r\n\r\nThis seems a tricky bug. I would like to try to fix it, but maybe I will need some help on how to approach it.\r\n\r\nThe issue is:\r\n\r\nWhen you use `device_map = \"auto\"`, internally `transformers` creates a context manager from `accelerate` (https://github.com/huggingface/transformers/blob/21dc5859421cf0d7d82d374b10f533611745a8c5/src/transformers/modeling_utils.py#L3081 and https://github.com/huggingface/transformers/blob/21dc5859421cf0d7d82d374b10f533611745a8c5/src/transformers/modeling_utils.py#L3086). You can see that this context manager basically set the default device to be \"meta\" (https://github.com/huggingface/accelerate/blob/dab62832de44c84e80045e4db53e087b71d0fd85/src/accelerate/big_modeling.py#L51-L81).\r\n\r\nDuring the instantiation of the DETR model, there is a step where we want frozen the batch norm (https://github.com/huggingface/transformers/blob/21dc5859421cf0d7d82d374b10f533611745a8c5/src/transformers/models/detr/modeling_detr.py#L307-L327), but the backbone, which was created with timm, is using meta device, i.e., the weight are not materialized so we can't copy.\r\n\r\nAs a workaround we can try to guarantee that the backbone model will be created on a physical device, but it breaks a bit the idea of device_map.\r\n\r\nAny thoughts on how to solve this issue?",
"If I'm not wrong (I usually am), we could solve it by not trying to load weights on the DetrFrozenBatchNorm2D if the device is `meta`, something like:\r\n\r\n```python\r\ndef replace_batch_norm(model):\r\n r\"\"\"\r\n Recursively replace all `torch.nn.BatchNorm2d` with `DetrFrozenBatchNorm2d`.\r\n\r\n Args:\r\n model (torch.nn.Module):\r\n input model\r\n \"\"\"\r\n for name, module in model.named_children():\r\n if isinstance(module, nn.BatchNorm2d):\r\n new_module = DetrFrozenBatchNorm2d(module.num_features)\r\n\r\n if not module.weight.device == torch.device(\"meta\"):\r\n new_module.weight.data.copy_(module.weight)\r\n new_module.bias.data.copy_(module.bias)\r\n new_module.running_mean.data.copy_(module.running_mean)\r\n new_module.running_var.data.copy_(module.running_var)\r\n\r\n model._modules[name] = new_module\r\n\r\n if len(list(module.children())) > 0:\r\n replace_batch_norm(module)\r\n```\r\n\r\nAnd then add something like\r\n\r\n```python\r\nself._no_split_modules = [\"DetrModel\", \"DetrMLPPredictionHead\", \"nn.Linear\"]\r\n```\r\n\r\nTo the `DetrForObjectDetection` constructor method.",
"This should be solved once the PR is merged ! "
] | 1,696 | 1,698 | 1,698 | NONE | null | ### System Info
- `transformers` version: 4.34.0
- Platform: Linux-5.15.0-86-generic-x86_64-with-glibc2.31
- Python version: 3.11.6
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.0.dev0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- GPU: RTX2060 6G
### Who can help?
@Narsil
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)
### Reproduction
Here is my code:
```python
import cv2
import numpy as np
from PIL import Image
from transformers import pipeline


def ndarray_to_image(ndarray):
    return Image.fromarray(np.uint8(ndarray))


cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    cv2.imshow('frame', frame)

    image = ndarray_to_image(frame)

    pipe = pipeline("object-detection", model="facebook/detr-resnet-50", device_map="auto")
    result = pipe(image)
    print(result)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```
When the pipeline is created with `device_map="auto"`, it raises this error:
```
{
"name": "NotImplementedError",
"message": "Cannot copy out of meta tensor; no data!",
"stack": "---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
/home/yongjer/程式/object detection/main.ipynb 儲存格 1 line 1
<a href='vscode-notebook-cell:/home/yongjer/%E7%A8%8B%E5%BC%8F/object%20detection/main.ipynb#W6sZmlsZQ%3D%3D?line=11'>12</a> cv2.imshow('frame', frame)
<a href='vscode-notebook-cell:/home/yongjer/%E7%A8%8B%E5%BC%8F/object%20detection/main.ipynb#W6sZmlsZQ%3D%3D?line=13'>14</a> image = ndarray_to_image(frame)
---> <a href='vscode-notebook-cell:/home/yongjer/%E7%A8%8B%E5%BC%8F/object%20detection/main.ipynb#W6sZmlsZQ%3D%3D?line=15'>16</a> pipe = pipeline(\"object-detection\", model=\"facebook/detr-resnet-50\", device_map=\"auto\")
<a href='vscode-notebook-cell:/home/yongjer/%E7%A8%8B%E5%BC%8F/object%20detection/main.ipynb#W6sZmlsZQ%3D%3D?line=16'>17</a> result = pipe(image)
<a href='vscode-notebook-cell:/home/yongjer/%E7%A8%8B%E5%BC%8F/object%20detection/main.ipynb#W6sZmlsZQ%3D%3D?line=17'>18</a> print(result)
File ~/miniforge3/envs/od/lib/python3.11/site-packages/transformers/pipelines/__init__.py:834, in pipeline(task, model, config, tokenizer, feature_extractor, image_processor, framework, revision, use_fast, token, device, device_map, torch_dtype, trust_remote_code, model_kwargs, pipeline_class, **kwargs)
832 if isinstance(model, str) or framework is None:
833 model_classes = {\"tf\": targeted_task[\"tf\"], \"pt\": targeted_task[\"pt\"]}
--> 834 framework, model = infer_framework_load_model(
835 model,
836 model_classes=model_classes,
837 config=config,
838 framework=framework,
839 task=task,
840 **hub_kwargs,
841 **model_kwargs,
842 )
844 model_config = model.config
845 hub_kwargs[\"_commit_hash\"] = model.config._commit_hash
File ~/miniforge3/envs/od/lib/python3.11/site-packages/transformers/pipelines/base.py:269, in infer_framework_load_model(model, config, model_classes, task, framework, **model_kwargs)
263 logger.warning(
264 \"Model might be a PyTorch model (ending with `.bin`) but PyTorch is not available. \"
265 \"Trying to load the model with Tensorflow.\"
266 )
268 try:
--> 269 model = model_class.from_pretrained(model, **kwargs)
270 if hasattr(model, \"eval\"):
271 model = model.eval()
File ~/miniforge3/envs/od/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py:565, in _BaseAutoModelClass.from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
563 elif type(config) in cls._model_mapping.keys():
564 model_class = _get_model_class(config, cls._model_mapping)
--> 565 return model_class.from_pretrained(
566 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
567 )
568 raise ValueError(
569 f\"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\
\"
570 f\"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}.\"
571 )
File ~/miniforge3/envs/od/lib/python3.11/site-packages/transformers/modeling_utils.py:3085, in PreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, config, cache_dir, ignore_mismatched_sizes, force_download, local_files_only, token, revision, use_safetensors, *model_args, **kwargs)
3082 config = cls._check_and_enable_flash_attn_2(config, torch_dtype=torch_dtype, device_map=device_map)
3084 with ContextManagers(init_contexts):
-> 3085 model = cls(config, *model_args, **model_kwargs)
3087 # Check first if we are `from_pt`
3088 if use_keep_in_fp32_modules:
File ~/miniforge3/envs/od/lib/python3.11/site-packages/transformers/models/detr/modeling_detr.py:1498, in DetrForObjectDetection.__init__(self, config)
1495 super().__init__(config)
1497 # DETR encoder-decoder model
-> 1498 self.model = DetrModel(config)
1500 # Object detection heads
1501 self.class_labels_classifier = nn.Linear(
1502 config.d_model, config.num_labels + 1
1503 ) # We add one for the \"no object\" class
File ~/miniforge3/envs/od/lib/python3.11/site-packages/transformers/models/detr/modeling_detr.py:1330, in DetrModel.__init__(self, config)
1327 super().__init__(config)
1329 # Create backbone + positional encoding
-> 1330 backbone = DetrConvEncoder(config)
1331 object_queries = build_position_encoding(config)
1332 self.backbone = DetrConvModel(backbone, object_queries)
File ~/miniforge3/envs/od/lib/python3.11/site-packages/transformers/models/detr/modeling_detr.py:361, in DetrConvEncoder.__init__(self, config)
359 # replace batch norm by frozen batch norm
360 with torch.no_grad():
--> 361 replace_batch_norm(backbone)
362 self.model = backbone
363 self.intermediate_channel_sizes = (
364 self.model.feature_info.channels() if config.use_timm_backbone else self.model.channels
365 )
File ~/miniforge3/envs/od/lib/python3.11/site-packages/transformers/models/detr/modeling_detr.py:319, in replace_batch_norm(model)
316 if isinstance(module, nn.BatchNorm2d):
317 new_module = DetrFrozenBatchNorm2d(module.num_features)
--> 319 new_module.weight.data.copy_(module.weight)
320 new_module.bias.data.copy_(module.bias)
321 new_module.running_mean.data.copy_(module.running_mean)
NotImplementedError: Cannot copy out of meta tensor; no data!"
}
```
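As a stopgap, pinning the pipeline to a single GPU instead of relying on `accelerate` dispatch avoids the meta-tensor path (a small sketch of what I fall back to):
```python
from transformers import pipeline

# single-device fallback: the backbone weights are materialized normally
pipe = pipeline("object-detection", model="facebook/detr-resnet-50", device=0)
result = pipe(image)  # `image` is the PIL image built in the loop above
print(result)
```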
### Expected behavior
When I set `device=0` rather than `device_map="auto"`, it works. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26700/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26699 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26699/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26699/comments | https://api.github.com/repos/huggingface/transformers/issues/26699/events | https://github.com/huggingface/transformers/pull/26699 | 1,933,478,052 | PR_kwDOCUB6oc5cSCDg | 26,699 | [ASR Pipe] Fix num frames for bs > 1 | {
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26699). All of your documentation changes will be reflected on that endpoint.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Any plan for a fix for this?",
"Reopened this @sanchit-gandhi! (not sure if we still want to work on this, if not, then feel free to close)",
"This would enable word level timestamps for whisper with batched inference right? Would love to see this, or another workaround if that's possible ",
"Yes! @thomasmol 🤗 ",
"This would be super valuable to so many applications that require word level transcriptions",
"Any updates on this feature? It would be a massive advantage",
"On testing via T4 GPU, while this PR does fix the issue, the GPU consumption ramps up quite significantly.\r\nWhile processing a 3 min 19 audio file, without word timestamps, it is able to complete the process with 24 batch size (default) under 30 seconds, with GPU consumption ~9.5 GiB.\r\nBut, with the word timestamps, it causes GPU OOM, with 24 batch size, the best I was able to do with 16 GB T4 GPU was batch size as 2, which took around 2.5 minutes, while still GPU consumption with this batch size going to around ~10 GiB.",
"We need this to be fixed for insanely fast whisper, is there any eta on the pr?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"#28114 probably addressed this ",
"Closed by https://github.com/huggingface/transformers/pull/28114."
] | 1,696 | 1,705 | 1,705 | CONTRIBUTOR | null | # What does this PR do?
Handles the case when the pipeline is batched and we want to return word-level timestamps. Here, we have a batch (list) of strides, which we slice to get the first stride value. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26699/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26699/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26699",
"html_url": "https://github.com/huggingface/transformers/pull/26699",
"diff_url": "https://github.com/huggingface/transformers/pull/26699.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26699.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26698 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26698/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26698/comments | https://api.github.com/repos/huggingface/transformers/issues/26698/events | https://github.com/huggingface/transformers/issues/26698 | 1,933,356,304 | I_kwDOCUB6oc5zPK0Q | 26,698 | itm_score output for BLIP2 | {
"login": "fferroni",
"id": 16327442,
"node_id": "MDQ6VXNlcjE2MzI3NDQy",
"avatar_url": "https://avatars.githubusercontent.com/u/16327442?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fferroni",
"html_url": "https://github.com/fferroni",
"followers_url": "https://api.github.com/users/fferroni/followers",
"following_url": "https://api.github.com/users/fferroni/following{/other_user}",
"gists_url": "https://api.github.com/users/fferroni/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fferroni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fferroni/subscriptions",
"organizations_url": "https://api.github.com/users/fferroni/orgs",
"repos_url": "https://api.github.com/users/fferroni/repos",
"events_url": "https://api.github.com/users/fferroni/events{/privacy}",
"received_events_url": "https://api.github.com/users/fferroni/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @fferroni , maybe this PR https://github.com/huggingface/transformers/pull/25612 could be related",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,696 | 1,704 | 1,704 | NONE | null | ### Feature request
Would it be possible to add support for outputting ITM/ITC scores for BLIP2? This is currently supported for BLIP v1.
### Motivation
LAVIS already contains the image-text matching capability here
https://github.com/salesforce/LAVIS/blob/3446bac20c5646d35ae383ebe6d13cec4f8b00cb/lavis/models/blip2_models/blip2_image_text_matching.py
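For reference, this is roughly how the existing BLIP v1 support is used in `transformers` today (a hedged sketch: the checkpoint id and the exact output attribute are from memory, so please double-check them against the docs):
```python
import requests
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForImageTextRetrieval

processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, text="two cats sleeping on a couch", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)  # ITM head is used by default
itm_probs = torch.softmax(outputs.itm_score, dim=-1)  # attribute name may need checking
print(itm_probs)
```
Something equivalent built on top of BLIP-2's Q-Former outputs is what this request is about.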
### Your contribution
Given some pointers on which objects/layers correspond to the LAVIS version vs `transformers` version, I would also try adding this. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26698/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26697 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26697/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26697/comments | https://api.github.com/repos/huggingface/transformers/issues/26697/events | https://github.com/huggingface/transformers/issues/26697 | 1,933,354,239 | I_kwDOCUB6oc5zPKT_ | 26,697 | use_flash_attention_2=True for Llama2 breaks generation | {
"login": "markovalexander",
"id": 22663468,
"node_id": "MDQ6VXNlcjIyNjYzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/22663468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/markovalexander",
"html_url": "https://github.com/markovalexander",
"followers_url": "https://api.github.com/users/markovalexander/followers",
"following_url": "https://api.github.com/users/markovalexander/following{/other_user}",
"gists_url": "https://api.github.com/users/markovalexander/gists{/gist_id}",
"starred_url": "https://api.github.com/users/markovalexander/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/markovalexander/subscriptions",
"organizations_url": "https://api.github.com/users/markovalexander/orgs",
"repos_url": "https://api.github.com/users/markovalexander/repos",
"events_url": "https://api.github.com/users/markovalexander/events{/privacy}",
"received_events_url": "https://api.github.com/users/markovalexander/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "younesbelkada",
"id": 49240599,
"node_id": "MDQ6VXNlcjQ5MjQwNTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/49240599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/younesbelkada",
"html_url": "https://github.com/younesbelkada",
"followers_url": "https://api.github.com/users/younesbelkada/followers",
"following_url": "https://api.github.com/users/younesbelkada/following{/other_user}",
"gists_url": "https://api.github.com/users/younesbelkada/gists{/gist_id}",
"starred_url": "https://api.github.com/users/younesbelkada/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/younesbelkada/subscriptions",
"organizations_url": "https://api.github.com/users/younesbelkada/orgs",
"repos_url": "https://api.github.com/users/younesbelkada/repos",
"events_url": "https://api.github.com/users/younesbelkada/events{/privacy}",
"received_events_url": "https://api.github.com/users/younesbelkada/received_events",
"type": "User",
"site_admin": false
}
] | [
"It generates \"nobody\" on any prompt actually",
"Hi @markovalexander \r\nI did not managed to repro on an A100, the latest FA-2 release and HF transformers main branch:\r\n\r\n```python\r\nimport torch\r\nfrom transformers import AutoModelForCausalLM, AutoTokenizer\r\n\r\ncheckpoint = \"meta-llama/Llama-2-7b-chat-hf\"\r\ndevice = \"cuda\" # for GPU usage or \"cpu\" for CPU usage\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(checkpoint)\r\nmodel = AutoModelForCausalLM.from_pretrained(checkpoint, torch_dtype=torch.float16, use_flash_attention_2=True, low_cpu_mem_usage=True).to(device)\r\n\r\ninputs = tokenizer.encode(\"Hello how are you?\", return_tensors=\"pt\").to(device)\r\noutputs = model.generate(inputs, max_new_tokens=4, do_sample=False)\r\nprint(tokenizer.decode(outputs[0]))\r\n```\r\n\r\nI get\r\n\r\n```bash\r\n<s> Hello how are you? I'm doing\r\n```\r\n\r\nWhat hardware are you using?",
"Hi @younesbelkada , thanks for looking into the issue. I am using A100, nvidia-smi first line is:\r\n\r\n`| NVIDIA-SMI 535.104.05 Driver Version: 535.104.05 CUDA Version: 12.2 |`\r\n\r\nand \r\n\r\n```\r\n❯ pip show flash_attn\r\nName: flash-attn\r\nVersion: 2.0.0.post1\r\n```",
"Thanks, I am using\r\n\r\n```bash\r\n> pip show flash_attn\r\nName: flash-attn\r\nVersion: 2.3.1.post1\r\n```\r\n\r\nCan you try to use the latest FA package? that might be the culprit. `pip install -U flash-attn --no-build-isolationn`",
"Updating flash attention helped, thank you :) ",
"Awesome, thanks @markovalexander !"
] | 1,696 | 1,696 | 1,696 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.34.0
- Platform: Linux-5.15.0-1042-oracle-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: 0.21.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
text models: @ArthurZucker and @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Using flash attention 2 completely breaks generation.
<img width="651" alt="image" src="https://github.com/huggingface/transformers/assets/22663468/22384c4a-0aa4-4a51-acc7-805379a2b72f">
### Expected behavior
Generations should match with and without Flash Attention 2. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26697/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26696 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26696/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26696/comments | https://api.github.com/repos/huggingface/transformers/issues/26696/events | https://github.com/huggingface/transformers/issues/26696 | 1,933,336,527 | I_kwDOCUB6oc5zPF_P | 26,696 | MADLAD-400 MT Models | {
"login": "noise-field",
"id": 14188757,
"node_id": "MDQ6VXNlcjE0MTg4NzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/14188757?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/noise-field",
"html_url": "https://github.com/noise-field",
"followers_url": "https://api.github.com/users/noise-field/followers",
"following_url": "https://api.github.com/users/noise-field/following{/other_user}",
"gists_url": "https://api.github.com/users/noise-field/gists{/gist_id}",
"starred_url": "https://api.github.com/users/noise-field/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/noise-field/subscriptions",
"organizations_url": "https://api.github.com/users/noise-field/orgs",
"repos_url": "https://api.github.com/users/noise-field/repos",
"events_url": "https://api.github.com/users/noise-field/events{/privacy}",
"received_events_url": "https://api.github.com/users/noise-field/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"@noise-field Can you pls share the link of model(MADLAD-400) implementation?\r\nI couldn't find it yet.\r\nIt would be appreciable.\r\nThanks",
"@yugaljain1999 as far as I understand, these are t5x (FLAX) models configured using `.gin` files (https://github.com/google/gin-config), e.g. for 3B model: https://console.cloud.google.com/storage/browser/_details/madlad-400-checkpoints/checkpoints/3b-mt/3b-mt.gin",
"Okay\r\n@noise-field But how we can load these models and get translated outputs?\r\nCan you share colab notebook or something to show how we can run these models?\r\nThanks",
"You should be able to use the conversion scripts in `transformers` like [this one](https://github.com/huggingface/transformers/blob/ad08137e473e00702fc3088a119da7026e1cb025/src/transformers/models/t5/convert_t5x_checkpoint_to_pytorch.py#L16)",
"There is a WIP notebook for this but output is nonsensical: https://colab.research.google.com/drive/1rZ2NRyl2zwmg0sQ2Wi-uZZF48iVYulTC",
"Was implemented in https://github.com/huggingface/candle/pull/1285 by @jbochi ",
"Thanks for tagging me, @noise-field .\r\n\r\nI have published the safetensors weights in this [collection](https://huggingface.co/collections/jbochi/madlad-400-65491e6a78726cac9a4b84b7).\r\n\r\nAll machine translation models work with `T5ForConditionalGeneration`. However, the [language model](https://huggingface.co/jbochi/madlad400-8b-lm) is a decoder-only T5, which is currently not supported:\r\n\r\n- [open issue](https://github.com/huggingface/transformers/issues/26647) \r\n- [Some observations](https://huggingface.co/jbochi/madlad400-3b-mt/discussions/2#654b6d28eeb563c6c86a3536)",
"Hey @jbochi would you be down to transfer the models you converted to the google organization? I think we can also open a PR to transformers similar #21929 to mention it in the doc, create a collection on the hub and not forget to mention that you converted the checkpoints! \r\nRegarding the T5Decoder only model, might make sense to add it in the PR to support these models. If the modelling changes are not too involved could make sense to have it in T5 (Like smae code as T5Encoder). \r\n\r\nWDYT? ",
"Hey @ArthurZucker . Sure, I can transfer the models if they are willing to maintain it. Opening the PR also sounds good to me. I think both things will help people find the model.\r\n\r\nI'll open the PR shortly.\r\n\r\nRegarding the decoder only model, I gave it a shot [here](https://huggingface.co/jbochi/madlad400-8b-lm/tree/main/decoder_only_t5).\r\n\r\nIt's hard to debug why it doesn't quite work without running the original code and comparing the output tensors.\r\n\r\nSome things I changed that are different from T5:\r\n- [Parallel Layers](https://paperswithcode.com/method/parallel-layers)\r\n- [Multi-Query Attention](https://paperswithcode.com/method/multi-query-attention)\r\n\r\nEdit:\r\n\r\nA few additional differences:\r\n\r\n- RoPE (`use_rotary_embedding=True`)\r\n- MLP has swish instead of gelu activation\r\n- bidrectional attention in the decoder\r\n- eos token = `3` (`\\n`)\r\n\r\nI believe it is exactly the same architecture as [PaLM 8B](https://arxiv.org/pdf/2204.02311v5.pdf)\r\n",
"Parallelism is now supported with `accelerate` rather than in transformers! ",
"Sorry, but by [Parallel Layers](https://paperswithcode.com/method/parallel-layers) here, I don't mean model parallelism.\r\n\r\nIt's a variation of the transformer architecture. Instead of applying the feedforward dense layer after attention, the attention layer and the FF layer are called in parallel and added up. \r\n\r\nHere's their implementation: https://github.com/google/flaxformer/blob/ea17eb012a1d340ddff017b7a534c2162aaec34c/flaxformer/architectures/t5/t5_architecture.py#L534-L578",
"I opened the PR. Please take a look :)",
"Great thanks 😉 ",
"@jbochi When I was trying MADLAD-400 base version, it's inference speed with batch_size=1 is much slower in both original and quantized version. So is there any way we can lower the latency?\r\n\r\nAnd is it possible to run MADLAD-400 inference in batch size>1 because in my 16GB GPU memory , that 11 GB model throws following error in case of batch_size>1, not with batch_size=1\r\n\r\n`[RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle)` with GPU only](https://stackoverflow.com/questions/66600362/runtimeerror-cuda-error-cublas-status-execution-failed-when-calling-cublassge)`\r\n\r\nSo may I know should we run MADLAD-3B model on high cost Nvidia GPUs like A30 to run it with batch_size>1 and hence reduce inference latency?\r\n\r\nYour response would be really appreciable on this.\r\n\r\nThanks",
"Hey @yugaljain1999 . I'm not an expert, but I believe quantization should allow larger batch sizes, but will not help with the speed.\r\n\r\nLoading the model in bfloat16 may help you run larger batch sizes. I just tried this on a Google Colab with a V100 with 16GB.\r\n\r\n```\r\nimport torch\r\nfrom transformers import T5Tokenizer, T5ForConditionalGeneration\r\n\r\ntokenizer = T5Tokenizer.from_pretrained(\"jbochi/madlad400-8b-lm\")\r\n\r\nmodel = T5ForConditionalGeneration.from_pretrained(\r\n \"jbochi/madlad400-3b-mt\", device_map=\"auto\", torch_dtype=torch.bfloat16)\r\n\r\ntext = [\"<2pt> I love pizza\", \"<2es> I love pizza\", \"<2it> I love pizza\"]\r\ninput_ids = tokenizer(text, padding=True, return_tensors=\"pt\").input_ids\r\n\r\n%%time\r\noutputs = model.generate(\r\n input_ids=input_ids.to('cuda')\r\n)\r\nfor i in range(len(outputs)):\r\n print(tokenizer.decode(outputs[i], skip_special_tokens=True))\r\n```\r\n\r\nThis prints:\r\n\r\n> Eu amo pizza\r\n> Me encanta la pizza\r\n> Amo la pizza\r\n> CPU times: user 4.18 s, sys: 1.19 s, total: 5.36 s\r\n> Wall time: 7.03 s\r\n\r\nIt only used 6.5GB at peak.\r\n",
"@jbochi Thanks for your response. Just one query, do we need to have same length of each input tensor in batch of input tensors?",
"Each input sentence can have a different number of tokens. You just need to\r\npass padding=True to the tokenizer, and it will append trailing padding\r\ntokens where needed.\r\n\r\nOn Wed, Nov 29, 2023, 4:58 AM Yugal Jain ***@***.***> wrote:\r\n\r\n> @jbochi <https://github.com/jbochi> Thanks for your response. Just one\r\n> query, do we need to have same length of each input tensor in batch of\r\n> input tensors?\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/26696#issuecomment-1831573254>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AACHO2F5JLV6CGQZYVRML3LYG4BMNAVCNFSM6AAAAAA5Y7RYEWVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQMZRGU3TGMRVGQ>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n",
"@jbochi Thanks for your response. Btw do we need to set max_new_tokens in generate() method to 512 or 1024, or is it not necessary to set this parameter?",
"That is only useful if you have long inputs and don't want to translate the\r\nwhole thing.\r\n\r\nOn Thu, Dec 7, 2023, 2:09 AM Yugal Jain ***@***.***> wrote:\r\n\r\n> @jbochi <https://github.com/jbochi> Thanks for your response. Btw do we\r\n> need to set max_new_tokens in generate() method to 512 or 1024, or is it\r\n> not necessary to set this parameter?\r\n>\r\n> —\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/transformers/issues/26696#issuecomment-1844790765>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AACHO2A7PGUHETHM4SI4MMDYIFTTLAVCNFSM6AAAAAA5Y7RYEWVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTQNBUG44TANZWGU>\r\n> .\r\n> You are receiving this because you were mentioned.Message ID:\r\n> ***@***.***>\r\n>\r\n"
] | 1,696 | 1,701 | 1,699 | CONTRIBUTOR | null | ### Model description
From the paper:
> We introduce MADLAD-400, a manually audited, general domain 3T token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing had in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, and find that it is competitive with models that are significantly larger, and report the results on different domains. In addition, we train a 8B-parameter language model, and assess the results on few-shot translation. We make the baseline models available to the research community.
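A usage sketch, assuming the checkpoints get converted to the Hub T5 format (the repo id and the `<2xx>` target-language prefix below come from the community conversion mentioned in the comments, so treat them as assumptions):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

repo = "jbochi/madlad400-3b-mt"  # community conversion, name to be confirmed
tokenizer = T5Tokenizer.from_pretrained(repo)
model = T5ForConditionalGeneration.from_pretrained(repo, device_map="auto")

input_ids = tokenizer("<2de> How are you today?", return_tensors="pt").input_ids.to(model.device)
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```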
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
Paper: https://arxiv.org/abs/2309.04662
Github (with checkpoint links): https://github.com/google-research/google-research/tree/master/madlad_400 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26696/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/26696/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26695 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26695/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26695/comments | https://api.github.com/repos/huggingface/transformers/issues/26695/events | https://github.com/huggingface/transformers/issues/26695 | 1,933,259,435 | I_kwDOCUB6oc5zOzKr | 26,695 | AttributeError: 'BitsAndBytesConfig' object has no attribute 'get_loading_attributes' | {
"login": "ZER01NE44",
"id": 142371641,
"node_id": "U_kgDOCHxrOQ",
"avatar_url": "https://avatars.githubusercontent.com/u/142371641?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZER01NE44",
"html_url": "https://github.com/ZER01NE44",
"followers_url": "https://api.github.com/users/ZER01NE44/followers",
"following_url": "https://api.github.com/users/ZER01NE44/following{/other_user}",
"gists_url": "https://api.github.com/users/ZER01NE44/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZER01NE44/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZER01NE44/subscriptions",
"organizations_url": "https://api.github.com/users/ZER01NE44/orgs",
"repos_url": "https://api.github.com/users/ZER01NE44/repos",
"events_url": "https://api.github.com/users/ZER01NE44/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZER01NE44/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"+1",
"cc @younesbelkada ",
"Hi @ZER01NE44 \r\nThanks for the issue, you seem to be using auto-train - and I see many issues here\r\n1- you are passing the flag `--use_int4` on a GPTQ model. `--use-int4` will try to convert the model in 4bit using bitsandbytes. You cannot convert a GPTQ model with bitsandbytes hence the error\r\n2- It seems that you are using a windows machine, I think bitsandbytes and windows are not compatible\r\nTo solve your issue you can try out two things\r\n1- remove `--use_int4` flag\r\n2- use another model than `TheBloke/Llama-2-13B-chat-GPTQ` e.g. `meta-llama/Llama-2-13b-chat-hf`",
"Thank you so much, @younesbelkada \r\n\r\nI removed --use_int4 and proceeded as you said, but i got an below error.\r\n\r\n```shell\r\nRuntimeError: element 0 of tensors does not require grad and does not have a grad_fn\r\n``` \r\n\r\nIf bitsandbytes are not compatible with Windows, can't I use GPTQ even if I fix this error?\r\nAlso, do you know how to fix this error?",
"Hi @ZER01NE44 \r\nThank you, can you share the full traceback of the error? Also the error might be specific to auto-train, can you also open a ticket there and tag me together with @abhishekkrthakur ? 🙏 ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"I am also facing the same issue.\r\nlog:\r\n(training_llama) ubuntu@ip-172-31-10-111:~/llma2/training/qlora$ sh scripts/finetune.sh \r\n\r\n===================================BUG REPORT===================================\r\nWelcome to bitsandbytes. For bug reports, please run\r\n\r\npython -m bitsandbytes\r\n\r\n and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues\r\n================================================================================\r\nbin /home/ubuntu/miniconda3/envs/training_llama/lib/python3.11/site-packages/bitsandbytes-0.39.0-py3.11.egg/bitsandbytes/libbitsandbytes_cuda117.so\r\n/home/ubuntu/miniconda3/envs/training_llama/lib/python3.11/site-packages/bitsandbytes-0.39.0-py3.11.egg/bitsandbytes/cuda_setup/main.py:149: UserWarning: /home/ubuntu/miniconda3/envs/training_llama did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...\r\n warn(msg)\r\nCUDA SETUP: CUDA runtime path found: /usr/local/cuda-11.7/lib64/libcudart.so.11.0\r\nCUDA SETUP: Highest compute capability among GPUs detected: 8.6\r\nCUDA SETUP: Detected CUDA version 117\r\nCUDA SETUP: Loading binary /home/ubuntu/miniconda3/envs/training_llama/lib/python3.11/site-packages/bitsandbytes-0.39.0-py3.11.egg/bitsandbytes/libbitsandbytes_cuda117.so...\r\nloading base model /home/ubuntu/llma2/text-generation-webui/models/TheBloke_Llama-2-70B-Chat-GPTQ/...\r\nTraceback (most recent call last):\r\n File \"/home/ubuntu/llma2/training/qlora/qlora.py\", line 769, in <module>\r\n train()\r\n File \"/home/ubuntu/llma2/training/qlora/qlora.py\", line 600, in train\r\n model = get_accelerate_model(args, checkpoint_dir)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/llma2/training/qlora/qlora.py\", line 264, in get_accelerate_model\r\n model = AutoModelForCausalLM.from_pretrained(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/miniconda3/envs/training_llama/lib/python3.11/site-packages/transformers/models/auto/auto_factory.py\", line 566, in from_pretrained\r\n return model_class.from_pretrained(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ubuntu/miniconda3/envs/training_llama/lib/python3.11/site-packages/transformers/modeling_utils.py\", line 2786, in from_pretrained\r\n loading_attr_dict = quantization_config.get_loading_attributes()\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nAttributeError: 'BitsAndBytesConfig' object has no attribute 'get_loading_attributes'\r\n(training_llama) ubuntu@ip-172-31-10-111:~/llma2/training/qlora$ \r\n",
"Hey @qburst-fidha, it is not possible to quantize an already quantized model (GPTQ model). I suggest you to use this model instead [Llama-2-70b-chat-hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf). ",
"> Hey @qburst-fidha, it is not possible to quantize an already quantized model (GPTQ model). I suggest you to use this model instead [Llama-2-70b-chat-hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf).\r\n\r\nIs there any other way to finetune the GPTQ model",
"Here is a [gist](https://gist.github.com/SunMarc/dcdb499ac16d355a8f265aa497645996) on how to finetune the GPTQ model !"
] | 1,696 | 1,700 | 1,700 | NONE | null | ### System Info
Platform: Windows 11
Python Version: 3.10.11
PyTorch Version: 2.0.1+cu118
accelerate: 0.23.0
bitsandbytes: 0.39.1
huggingface-hub: 0.17.3
tokenizers: 0.14.1
transformers: 4.35.0.dev0
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
1. autotrain llm --train --project_name test-llama2-finetune --model TheBloke/Llama-2-13B-chat-GPTQ --data_path C:\Users\JeongHyoengyo\Desktop\data\ --text_column text --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft --model_max_length 2048
While fine-tuning **TheBloke/Llama-2-13B-chat-GPTQ**, I ran into this issue.
```shell
(Fine-tuning) C:\Users\JeongHyoengyo>autotrain llm --train --project_name test-llama2-finetune --model TheBloke/Llama-2-13B-chat-GPTQ --data_path C:\Users\JeongHyoengyo\Desktop\data\ --text_column text --use_peft --use_int4 --learning_rate 2e-4 --train_batch_size 2 --num_train_epochs 3 --trainer sft --model_max_length 2048
bin C:\Users\JeongHyoengyo\anaconda3\envs\Fine-tuning\lib\site-packages\bitsandbytes\libbitsandbytes_cuda118.dll
> INFO Running LLM
> INFO Params: Namespace(version=False, train=True, deploy=False, inference=False, data_path='C:\\Users\\JeongHyoengyo\\Desktop\\data\\', train_split='train', valid_split=None, text_column='text', model='TheBloke/Llama-2-13B-chat-GPTQ', learning_rate=0.0002, num_train_epochs=3, train_batch_size=2, warmup_ratio=0.1, gradient_accumulation_steps=1, optimizer='adamw_torch', scheduler='linear', weight_decay=0.0, max_grad_norm=1.0, seed=42, add_eos_token=False, block_size=-1, use_peft=True, lora_r=16, lora_alpha=32, lora_dropout=0.05, logging_steps=-1, project_name='test-llama2-finetune', evaluation_strategy='epoch', save_total_limit=1, save_strategy='epoch', auto_find_batch_size=False, fp16=False, push_to_hub=False, use_int8=False, model_max_length=2048, repo_id=None, use_int4=True, trainer='sft', target_modules=None, merge_adapter=False, token=None, backend='default', username=None, use_flash_attention_2=False, func=<function run_llm_command_factory at 0x0000025FC810FC70>)
Using pad_token, but it is not set yet.
> ERROR train has failed due to an exception:
> ERROR Traceback (most recent call last):
File "C:\Users\JeongHyoengyo\anaconda3\envs\Fine-tuning\lib\site-packages\autotrain\utils.py", line 280, in wrapper
return func(*args, **kwargs)
File "C:\Users\JeongHyoengyo\anaconda3\envs\Fine-tuning\lib\site-packages\autotrain\trainers\clm\__main__.py", line 124, in train
model = AutoModelForCausalLM.from_pretrained(
File "C:\Users\JeongHyoengyo\anaconda3\envs\Fine-tuning\lib\site-packages\transformers\models\auto\auto_factory.py", line 565, in from_pretrained
return model_class.from_pretrained(
File "C:\Users\JeongHyoengyo\anaconda3\envs\Fine-tuning\lib\site-packages\transformers\modeling_utils.py", line 2690, in from_pretrained
loading_attr_dict = quantization_config.get_loading_attributes()
AttributeError: 'BitsAndBytesConfig' object has no attribute 'get_loading_attributes'
```
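For reference, I think this is roughly what autotrain ends up doing under the hood when `--use_int4` is combined with this checkpoint (an untested sketch, just to show the conflicting combination of a bitsandbytes config on top of a repo that already ships a GPTQ config):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_4bit=True)

# "TheBloke/Llama-2-13B-chat-GPTQ" already carries a GPTQ quantization_config,
# so stacking a BitsAndBytesConfig on top of it is the combination that fails
model = AutoModelForCausalLM.from_pretrained(
    "TheBloke/Llama-2-13B-chat-GPTQ",
    quantization_config=bnb_config,
    device_map="auto",
)
```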
I've already upgraded Transformers from 4.30.0 to 4.35.0, but the same problem appears.
It was also the same when I installed bitsandbytes from https://github.com/TimDettmers/bitsandbytes.
My current bitsandbytes version is 0.39.1 because 0.41.1 fails the check below.
```shell
python -m bitsandbytes
```
I don't know how to fix it. Please help me.
### Expected behavior
Fine-tuning completes successfully. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26695/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26694 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26694/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26694/comments | https://api.github.com/repos/huggingface/transformers/issues/26694/events | https://github.com/huggingface/transformers/pull/26694 | 1,933,235,847 | PR_kwDOCUB6oc5cRM81 | 26,694 | Add links to the Hub | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26694). All of your documentation changes will be reflected on that endpoint.",
"I think with the badges (but rename it to All model _checkpoints_ instead of All model pages) it is more visible than the text which doesn't stand out as much. ",
"Great idea, however is this currently hardcoded? Would be great to add it to the CookieCutter so that new models have this as well. Also more a fan of the badge for visibility.",
"Begone bot, I still want to get to this",
"I updated everything to use the tags.\r\n\r\nIf this looks good to you all I'll add it to cookie cutter + add a script to verify it's indeed in the docs.\r\n\r\nThat's the script I used to add it everywhere:\r\n\r\n```py\r\nimport os\r\nfrom pathlib import Path\r\n\r\n\r\ndef get_div_text(model_type):\r\n return f\"\"\"<div class=\"flex flex-wrap space-x-1\">\r\n<a href=\"https://huggingface.co/models?filter={model_type}\">\r\n<img alt=\"Models\" src=\"https://img.shields.io/badge/All_model_pages-{model_type}-blueviolet\">\r\n</div>\r\n\r\n\"\"\"\r\n\r\n\r\npath_to_docs = Path('/path_t_transformers/transformers/docs/source/en/model_doc')\r\nmodel_docs = os.listdir(path_to_docs)\r\nprint(model_docs)\r\n\r\nfor model_doc in model_docs:\r\n with open(path_to_docs / model_doc, 'r+') as f:\r\n doc = f.read()\r\n\r\n div = doc.split('#')[1]\r\n if '<div class=\"flex flex-wrap space-x-1\">' in div:\r\n print('✅', model_doc)\r\n else:\r\n print('❌', model_doc)\r\n new_doc = doc.split('#')\r\n new_doc[1] = new_doc[1] + get_div_text(model_doc[:-3])\r\n\r\n with open(path_to_docs / model_doc, 'w') as f:\r\n f.write('#'.join(new_doc))\r\n```",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,696 | 1,704 | 1,704 | MEMBER | null | Adds a few additional links to the Hub. If there are ideas to do this even deeper, would love to incorporate them in this PR.
I would have loved to share initial collections as well, but I have to check whether I can embed collection visualisation in the doc page, and it's not relevant for quite a few architectures. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26694/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26694/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26694",
"html_url": "https://github.com/huggingface/transformers/pull/26694",
"diff_url": "https://github.com/huggingface/transformers/pull/26694.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26694.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26693 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26693/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26693/comments | https://api.github.com/repos/huggingface/transformers/issues/26693/events | https://github.com/huggingface/transformers/pull/26693 | 1,933,111,575 | PR_kwDOCUB6oc5cQxKV | 26,693 | [docstring] Fix docstring for `ASTFeatureExtractor` | {
"login": "imsoumya18",
"id": 50456734,
"node_id": "MDQ6VXNlcjUwNDU2NzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/50456734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imsoumya18",
"html_url": "https://github.com/imsoumya18",
"followers_url": "https://api.github.com/users/imsoumya18/followers",
"following_url": "https://api.github.com/users/imsoumya18/following{/other_user}",
"gists_url": "https://api.github.com/users/imsoumya18/gists{/gist_id}",
"starred_url": "https://api.github.com/users/imsoumya18/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/imsoumya18/subscriptions",
"organizations_url": "https://api.github.com/users/imsoumya18/orgs",
"repos_url": "https://api.github.com/users/imsoumya18/repos",
"events_url": "https://api.github.com/users/imsoumya18/events{/privacy}",
"received_events_url": "https://api.github.com/users/imsoumya18/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh Why am I getting this error?\r\n\r\n",
"Could just try `pip install tensorrt` - may not be completely necessary for your docstring change though.\r\n\r\nAlso, I would try `make fix-copies` in your next commit as it looks like this was what made CI fail",
"It's just a warning. No big deal.\r\n\r\nAnd please follow #26638: you need to run a command to perform some fix.",
"@ydshieh now have a look please. Just getting the same warning as before. Otherwise all are ok",
"\r\n",
"You haven't done any fix but just remove one entry in the list to ignore. You should follow the issue page of this event.",
"Please run `python3 utils/check_docstrings.py --fix_and_overwrite` (or python)",
"Hi @imsoumya18 Do you encounter some issues when running `python3 utils/check_docstrings.py --fix_and_overwrite`? ",
"> Hi @imsoumya18 Do you encounter some issues when running `python3 utils/check_docstrings.py --fix_and_overwrite`?\r\n\r\nYes, I am continuously getting this error\r\n\r\n",
"I see, but I don't think it is an error, but just warning.\r\n\r\nAfter you run that command, no file has being changed by the script? If so, I will take a look (but in this case, probably indeed an environment issue).",
"> I see, but I don't think it is an error, but just warning.\r\n> \r\n> After you run that command, no file has being changed by the script? If so, I will take a look (but in this case, probably indeed an environment issue).\r\n\r\nAfter it, no file was changed. But, I think (I maybe wrong) it's an error. Maybe a problem of the env. Also, I gave up hope and completely deleted the fork and repo whole. Let me fork and setup the env and check if the issue is solved. I will then open a new PR or reopen this one. Let me try again by setting up from start.",
"@imsoumya18 No worry, thank you for the effort. Environment is indeed frustrated, but maybe a new fresh python virtual environment is easy way. Let me know if you need any help. Look forward to your contribution 🔥 "
] | 1,696 | 1,697 | 1,696 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #26638
## Who can review?
@ydshieh
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26693/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26693",
"html_url": "https://github.com/huggingface/transformers/pull/26693",
"diff_url": "https://github.com/huggingface/transformers/pull/26693.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26693.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26692 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26692/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26692/comments | https://api.github.com/repos/huggingface/transformers/issues/26692/events | https://github.com/huggingface/transformers/pull/26692 | 1,933,046,830 | PR_kwDOCUB6oc5cQite | 26,692 | Fix stale bot | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,696 | 1,696 | 1,696 | MEMBER | null | Fixes the stale bot | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26692/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26692/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26692",
"html_url": "https://github.com/huggingface/transformers/pull/26692",
"diff_url": "https://github.com/huggingface/transformers/pull/26692.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26692.patch",
"merged_at": 1696862398000
} |
https://api.github.com/repos/huggingface/transformers/issues/26691 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26691/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26691/comments | https://api.github.com/repos/huggingface/transformers/issues/26691/events | https://github.com/huggingface/transformers/pull/26691 | 1,933,038,488 | PR_kwDOCUB6oc5cQg49 | 26,691 | Fix docstrings for vanilla clip | {
"login": "isaac-chung",
"id": 48971969,
"node_id": "MDQ6VXNlcjQ4OTcxOTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/48971969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isaac-chung",
"html_url": "https://github.com/isaac-chung",
"followers_url": "https://api.github.com/users/isaac-chung/followers",
"following_url": "https://api.github.com/users/isaac-chung/following{/other_user}",
"gists_url": "https://api.github.com/users/isaac-chung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isaac-chung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isaac-chung/subscriptions",
"organizations_url": "https://api.github.com/users/isaac-chung/orgs",
"repos_url": "https://api.github.com/users/isaac-chung/repos",
"events_url": "https://api.github.com/users/isaac-chung/events{/privacy}",
"received_events_url": "https://api.github.com/users/isaac-chung/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh ready for ✅ thanks!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26691). All of your documentation changes will be reflected on that endpoint."
] | 1,696 | 1,696 | 1,696 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/26638 for `CLIPTokenizer`, `CLIPTokenizerFast`, and `CLIPVisionConfig`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26691/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26691/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26691",
"html_url": "https://github.com/huggingface/transformers/pull/26691",
"diff_url": "https://github.com/huggingface/transformers/pull/26691.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26691.patch",
"merged_at": 1696865946000
} |
https://api.github.com/repos/huggingface/transformers/issues/26690 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26690/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26690/comments | https://api.github.com/repos/huggingface/transformers/issues/26690/events | https://github.com/huggingface/transformers/issues/26690 | 1,932,981,615 | I_kwDOCUB6oc5zNvVv | 26,690 | [RFC] Updating pipeline models | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
},
{
"id": 2796628563,
"node_id": "MDU6TGFiZWwyNzk2NjI4NTYz",
"url": "https://api.github.com/repos/huggingface/transformers/labels/WIP",
"name": "WIP",
"color": "234C99",
"default": false,
"description": "Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress"
}
] | open | false | null | [] | [
"maybe let's do it for one or two pipelines and we'll see if it breaks many things in the wild? (as long as the model outputs have the same \"shape\", i'm not sure it would break many things)",
"Nice idea! We have three audio pipelines in `transformers`:\r\n1. Text to Audio (aliased to Text to Speech)\r\n2. Audio Classification\r\n3. Automatic Speech Recognition\r\n\r\nText to audio is relatively new, so the default model used there is already up to date: https://huggingface.co/suno/bark-small\r\n\r\nLike text classification, audio classification requires a model specific to the classification tasks. E.g. for key-word spotting (KWS), you need to use an audio classification model trained on the KWS task. Similarly for language identification (LID), you need to use an audio classification model trained on the LID task. Therefore, it's probably not too useful changing the default model, since it's likely users pass a specific checkpoint for their task already.\r\n\r\nFor speech recognition, we should definitely consider updating from Wav2Vec2 to Whisper. There are 5 checkpoint sizes to select from, so there should be one compatible with the hardware/performance constraints you've outlined: https://huggingface.co/collections/openai/whisper-release-6501bba2cf999715fd953013",
"Don't stale! We're still planning this",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,696 | 1,704 | null | MEMBER | null | ### Feature request
We're considering updating the default models used in `transformers` pipelines. This has the potential to greatly improve performance, and get rid of limitations caused by the existing models, but it may also break backward compatibility. Many of the default models have not been changed since the tasks were first added in pipelines, so users might assume that they are 'permanent', and might be surprised by an update.
When updating pipelines, we would aim for the following objectives:
- The model should run on a base Colab instance (i.e. inference at max sequence length should fit inside 16GB VRAM)
- The default context length for text tasks should be long (at least 4k tokens where possible, ideally infinite with rope/alibi scaling)
- The performance should be as strong as reasonably possible within those two constraints
### Motivation
We have seen a number of [user issues](https://github.com/huggingface/transformers/issues/24392) prompted by the default pipeline models in `transformers` being outdated. For example, the default `sentiment-analysis` pipeline uses a finetuned `distilbert` model with a maximum sequence length of 512 tokens. You can see the full list of default models [here](https://github.com/huggingface/transformers/blob/main/src/transformers/pipelines/__init__.py#L155).
Performance on these tasks could be greatly improved with more modern models that have newer features like longer (potentially unlimited!) context lengths.
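For context, a minimal sketch of how the defaults interact with user-specified checkpoints today (the checkpoint named below is the current `sentiment-analysis` default and is only an example):

```python
from transformers import pipeline

# With no `model` argument, the pipeline silently falls back to the hard-coded
# default for the task; passing a checkpoint explicitly overrides that default.
default_classifier = pipeline("sentiment-analysis")
explicit_classifier = pipeline(
    "sentiment-analysis", model="distilbert-base-uncased-finetuned-sst-2-english"
)
print(explicit_classifier("Updating the pipeline defaults would be great!"))
```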
### Your contribution
I'll make the PR and potentially train new models for some of these tasks. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26690/reactions",
"total_count": 4,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26690/timeline | reopened | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26689 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26689/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26689/comments | https://api.github.com/repos/huggingface/transformers/issues/26689/events | https://github.com/huggingface/transformers/pull/26689 | 1,932,962,966 | PR_kwDOCUB6oc5cQQRL | 26,689 | Update run_image_classification.py | {
"login": "naveentnj",
"id": 57190478,
"node_id": "MDQ6VXNlcjU3MTkwNDc4",
"avatar_url": "https://avatars.githubusercontent.com/u/57190478?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/naveentnj",
"html_url": "https://github.com/naveentnj",
"followers_url": "https://api.github.com/users/naveentnj/followers",
"following_url": "https://api.github.com/users/naveentnj/following{/other_user}",
"gists_url": "https://api.github.com/users/naveentnj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/naveentnj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/naveentnj/subscriptions",
"organizations_url": "https://api.github.com/users/naveentnj/orgs",
"repos_url": "https://api.github.com/users/naveentnj/repos",
"events_url": "https://api.github.com/users/naveentnj/events{/privacy}",
"received_events_url": "https://api.github.com/users/naveentnj/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"What is this PR?",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,696 | 1,700 | 1,700 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26689/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26689",
"html_url": "https://github.com/huggingface/transformers/pull/26689",
"diff_url": "https://github.com/huggingface/transformers/pull/26689.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26689.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26688 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26688/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26688/comments | https://api.github.com/repos/huggingface/transformers/issues/26688/events | https://github.com/huggingface/transformers/pull/26688 | 1,932,954,767 | PR_kwDOCUB6oc5cQObs | 26,688 | added a min_eos_p parameter, defined 'SupressTokensLogitsProcessor' c… | {
"login": "anishsoni29",
"id": 96867765,
"node_id": "U_kgDOBcYVtQ",
"avatar_url": "https://avatars.githubusercontent.com/u/96867765?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anishsoni29",
"html_url": "https://github.com/anishsoni29",
"followers_url": "https://api.github.com/users/anishsoni29/followers",
"following_url": "https://api.github.com/users/anishsoni29/following{/other_user}",
"gists_url": "https://api.github.com/users/anishsoni29/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anishsoni29/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anishsoni29/subscriptions",
"organizations_url": "https://api.github.com/users/anishsoni29/orgs",
"repos_url": "https://api.github.com/users/anishsoni29/repos",
"events_url": "https://api.github.com/users/anishsoni29/events{/privacy}",
"received_events_url": "https://api.github.com/users/anishsoni29/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Committed this PR with the required changes",
"cc @ylacombe ",
"Hi @anishsoni29, thanks for your help here! however I think @isaac-chung already started working on it yesterday, as you can see [here](https://github.com/huggingface/transformers/issues/26672#issuecomment-1752105521). He even already proposed PR #26675.\r\n\r\nIt's really nice to see your keen motivation here, though! What would you think of contributing to other issues? We have a [docstrings sprint](https://github.com/huggingface/transformers/issues/26638) running this month, if you need any ideas !\r\n",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"#26675 merged!",
"So how do I set min_eos_p in the actual code? "
] | 1,696 | 1,702 | 1,699 | NONE | null | …lass and created a 'LogitsProcessorList' containing both a custom logits processor and a 'MinLengthLogitsProcessor'.
# What does this PR do?
1. It adds a custom logits processor (CustomLogitsProcessor) to the generate method.
2. It introduces a threshold on the EOS probability (the `min_eos_p` parameter from the title), below which the EOS token is suppressed.
3. The `LogitsProcessorList` includes the original `suppress_tokens_logits_processor`, the custom `custom_logits_processor`, and a `MinLengthLogitsProcessor` to ensure that the generated output has a minimum length of 1 token (see the sketch below).
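For illustration, a rough, self-contained sketch of the kind of composition described above — the `EosProbabilityThreshold` class, the `0.2` threshold, and `eos_token_id=2` are assumptions for this example, not the PR's actual code:

```python
import torch
from transformers import LogitsProcessor, LogitsProcessorList, MinLengthLogitsProcessor


class EosProbabilityThreshold(LogitsProcessor):
    """Suppress EOS whenever its probability is below `min_eos_p` (illustrative only)."""

    def __init__(self, min_eos_p: float, eos_token_id: int):
        self.min_eos_p = min_eos_p
        self.eos_token_id = eos_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        probs = torch.nn.functional.softmax(scores, dim=-1)
        below_threshold = probs[:, self.eos_token_id] < self.min_eos_p
        # Mask out EOS for sequences where its probability is still too low.
        scores[below_threshold, self.eos_token_id] = -float("inf")
        return scores


# Combine the custom processor with a MinLengthLogitsProcessor; the resulting
# list would be passed to `model.generate(..., logits_processor=processors)`.
processors = LogitsProcessorList(
    [
        EosProbabilityThreshold(min_eos_p=0.2, eos_token_id=2),
        MinLengthLogitsProcessor(1, eos_token_id=2),
    ]
)
```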
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
#26672
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26688/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26688",
"html_url": "https://github.com/huggingface/transformers/pull/26688",
"diff_url": "https://github.com/huggingface/transformers/pull/26688.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26688.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26687 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26687/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26687/comments | https://api.github.com/repos/huggingface/transformers/issues/26687/events | https://github.com/huggingface/transformers/issues/26687 | 1,932,768,994 | I_kwDOCUB6oc5zM7bi | 26,687 | pad_across_processes not compatible with PyTorch's nested_tensor | {
"login": "frankier",
"id": 299380,
"node_id": "MDQ6VXNlcjI5OTM4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankier",
"html_url": "https://github.com/frankier",
"followers_url": "https://api.github.com/users/frankier/followers",
"following_url": "https://api.github.com/users/frankier/following{/other_user}",
"gists_url": "https://api.github.com/users/frankier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankier/subscriptions",
"organizations_url": "https://api.github.com/users/frankier/orgs",
"repos_url": "https://api.github.com/users/frankier/repos",
"events_url": "https://api.github.com/users/frankier/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankier/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,696 | 1,700 | 1,700 | NONE | null | ### System Info
Latest transformers version
### Who can help?
@muellerzr @pacman100
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
To reproduce, take any model, return a `nested_tensor` from its output, and then train it with `Trainer`. If you direct me towards a self-contained example to start from, I will modify it to reproduce the error. When `pad_across_processes` reaches the nested tensor, it will try to do things like call `len(...)` on it, which fails.
### Expected behavior
The simplest fix is to just pass through nested_tensor
```python
def _pad_across_processes(self, tensor, *args, **kwargs):
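        # Nested tensors can't be padded or concatenated like regular tensors, so pass them through unchanged.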
if getattr(tensor, "is_nested", False):
return tensor
return super()._pad_across_processes(tensor, *args, **kwargs)
```
This should work, as long as the user processes things correctly themselves in `preprocess_logits_for_metrics`.
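For example, a minimal sketch of such handling — this assumes PyTorch's `torch.nested.to_padded_tensor` and a padding value of `0.0`, which may not suit every metric:

```python
import torch


def preprocess_logits_for_metrics(logits, labels):
    # Convert a nested tensor into a regular padded tensor so the rest of the
    # Trainer's evaluation loop (gathering, numpy conversion) can handle it.
    if getattr(logits, "is_nested", False):
        logits = torch.nested.to_padded_tensor(logits, 0.0)
    return logits
```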
There is not an obvious default since numpy doesn't have an exact equivalent. It would be possible to add some default behaviour, e.g. list of numpy arrays, or pad everything using `to_padded_tensor`. I would, however, favor simply bailing out when a nested_tensor reaches preprocess_logits_for_metrics, telling the user they must define custom behaviour. The reason for this is that nested_tensor is not yet stable. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26687/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26686 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26686/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26686/comments | https://api.github.com/repos/huggingface/transformers/issues/26686/events | https://github.com/huggingface/transformers/issues/26686 | 1,932,656,113 | I_kwDOCUB6oc5zMf3x | 26,686 | Trainer subclasses should be able to replace Accelerator / Optimizer | {
"login": "frankier",
"id": 299380,
"node_id": "MDQ6VXNlcjI5OTM4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankier",
"html_url": "https://github.com/frankier",
"followers_url": "https://api.github.com/users/frankier/followers",
"following_url": "https://api.github.com/users/frankier/following{/other_user}",
"gists_url": "https://api.github.com/users/frankier/gists{/gist_id}",
"starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/frankier/subscriptions",
"organizations_url": "https://api.github.com/users/frankier/orgs",
"repos_url": "https://api.github.com/users/frankier/repos",
"events_url": "https://api.github.com/users/frankier/events{/privacy}",
"received_events_url": "https://api.github.com/users/frankier/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Okay, I've now realised that this won't work unless it's `@classmethod`, so perhaps `get_optimizer_cls_and_kwargs` should be changed for this too? Delaying working on a PR until this has been discussed.",
"cc @pacman100 @muellerzr ",
"@frankier (sorry for the delay) why does this require it being a class method? It should just be static I believe unless you're seeing something I'm not",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,696 | 1,704 | 1,704 | NONE | null | ### Feature request
It's possible that some in-depth customization of the `Trainer` class may require changes to the `Accelerator`. There is already such a pattern for the optimizer, where the class and keyword arguments are obtained from:
```python
@staticmethod
def get_optimizer_cls_and_kwargs(args: TrainingArguments) -> Tuple[Any, Any]:
```
So it would be nice to have a corresponding
```python
@staticmethod
def get_accelerator_cls_and_kwargs(args: TrainingArguments) -> Tuple[Any, Any]:
```
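For illustration, a rough sketch of how a subclass might use such a hook — note that `get_accelerator_cls_and_kwargs` is only the proposal here (it does not exist in `Trainer` today), and the `Accelerator` kwargs shown are merely an example:

```python
from accelerate import Accelerator
from transformers import Trainer, TrainingArguments


class MyTrainer(Trainer):
    @staticmethod
    def get_accelerator_cls_and_kwargs(args: TrainingArguments):
        # Hypothetical hook mirroring `get_optimizer_cls_and_kwargs`: a subclass
        # overrides it to swap in a custom Accelerator class or extra kwargs.
        return Accelerator, {"gradient_accumulation_steps": args.gradient_accumulation_steps}
```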
### Motivation
Customisability
### Your contribution
I am going to submit a PR | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26686/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26685 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26685/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26685/comments | https://api.github.com/repos/huggingface/transformers/issues/26685/events | https://github.com/huggingface/transformers/pull/26685 | 1,932,621,218 | PR_kwDOCUB6oc5cPGGZ | 26,685 | [docstring] Fix docstring for `LlamaConfig` | {
"login": "pavaris-pm",
"id": 69553539,
"node_id": "MDQ6VXNlcjY5NTUzNTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/69553539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pavaris-pm",
"html_url": "https://github.com/pavaris-pm",
"followers_url": "https://api.github.com/users/pavaris-pm/followers",
"following_url": "https://api.github.com/users/pavaris-pm/following{/other_user}",
"gists_url": "https://api.github.com/users/pavaris-pm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pavaris-pm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pavaris-pm/subscriptions",
"organizations_url": "https://api.github.com/users/pavaris-pm/orgs",
"repos_url": "https://api.github.com/users/pavaris-pm/repos",
"events_url": "https://api.github.com/users/pavaris-pm/events{/privacy}",
"received_events_url": "https://api.github.com/users/pavaris-pm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh can you please review this",
"@abzdel i already passed all 7 CI test cases after following your recommendation. Thank you alex!",
"> @abzdel i already passed all 7 CI test cases after following your recommendation. Thank you alex!\r\n\r\nAwesome! Keep me in the loop if anything else comes up.",
"Hi, could you run `make fixup` 🙏 Thank you!",
"@ydshieh roger that! i will run `make fixup` ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26685). All of your documentation changes will be reflected on that endpoint.",
"@ydshieh already finished running `make fixup`. Now it passed all CI tests. Can you please review it."
] | 1,696 | 1,696 | 1,696 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #26638 by fixing a typo in the docstring of `LlamaConfig`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh @abzdel
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26685/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26685",
"html_url": "https://github.com/huggingface/transformers/pull/26685",
"diff_url": "https://github.com/huggingface/transformers/pull/26685.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26685.patch",
"merged_at": 1696950349000
} |
https://api.github.com/repos/huggingface/transformers/issues/26684 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26684/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26684/comments | https://api.github.com/repos/huggingface/transformers/issues/26684/events | https://github.com/huggingface/transformers/issues/26684 | 1,932,606,936 | I_kwDOCUB6oc5zMT3Y | 26,684 | `tokenizer.apply_chat_template` not working as expected for Mistral-7B model; it adds `<<SYS>>` despite no system message. | {
"login": "ivsanro1",
"id": 30293331,
"node_id": "MDQ6VXNlcjMwMjkzMzMx",
"avatar_url": "https://avatars.githubusercontent.com/u/30293331?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ivsanro1",
"html_url": "https://github.com/ivsanro1",
"followers_url": "https://api.github.com/users/ivsanro1/followers",
"following_url": "https://api.github.com/users/ivsanro1/following{/other_user}",
"gists_url": "https://api.github.com/users/ivsanro1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ivsanro1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ivsanro1/subscriptions",
"organizations_url": "https://api.github.com/users/ivsanro1/orgs",
"repos_url": "https://api.github.com/users/ivsanro1/repos",
"events_url": "https://api.github.com/users/ivsanro1/events{/privacy}",
"received_events_url": "https://api.github.com/users/ivsanro1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @ivsanro1, this is because an update to the Mistral model repo removed the chat template by accident, which caused the tokenizer to fall back to the default LLaMA chat ttemplate. I'm communicating with the team now, hopefully it can be restored very soon!",
"Hello @Rocketknight1, thanks a lot for the explanation, that makes a lot of sense and explains why it happened without changing transformers version.\r\n\r\nAs this is not a transformers issue per se, please feel free to close the issue when you consider it appropriate.",
"Hi @ivsanro1, the fix has been merged [here](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1/discussions/45), so I'll close the issue now. Thank you very much for the quick bug report though - this issue was what alerted us to the problem and let us fix it so quickly!"
] | 1,696 | 1,696 | 1,696 | NONE | null | ### System Info
- Colab
- `transformers==4.34.0`
### Who can help?
Maybe @ArthurZucker or @Rocketknight1
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Colab link to reproduce it [here](https://colab.research.google.com/drive/16xTa_3PawEocUleRzGei9ipXZVlgNjpS?usp=sharing)
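For reference, a minimal sketch of the reproduction (equivalent to what the Colab does; the exact messages are assumptions mirroring the model-card example quoted below):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "What is your favourite condiment?"},
    {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice."},
    {"role": "user", "content": "Do you have mayonnaise recipes?"},
]

# `tokenize=False` returns the fully formatted prompt string, which currently
# contains `<<SYS>>`/`<</SYS>>` markers even though no system message was given.
print(tokenizer.apply_chat_template(messages, tokenize=False))
```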
### Expected behavior
The resulting instantiated prompt should be like the one [shown in the model card](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1#instruction-format), and it should not contain `<<SYS>>` or `<</SYS>>` delimiters, or system message at all:
```
text = "<s>[INST] What is your favourite condiment? [/INST]"
"Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!</s> "
"[INST] Do you have mayonnaise recipes? [/INST]"
```
Additional note: I was running the same code yesterday on the same version of `transformers` and it was working correctly. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26684/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26683 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26683/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26683/comments | https://api.github.com/repos/huggingface/transformers/issues/26683/events | https://github.com/huggingface/transformers/pull/26683 | 1,932,554,371 | PR_kwDOCUB6oc5cO3oi | 26,683 | Adding EncT5 model for non-autoregressive tasks | {
"login": "hackyon",
"id": 1557853,
"node_id": "MDQ6VXNlcjE1NTc4NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1557853?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hackyon",
"html_url": "https://github.com/hackyon",
"followers_url": "https://api.github.com/users/hackyon/followers",
"following_url": "https://api.github.com/users/hackyon/following{/other_user}",
"gists_url": "https://api.github.com/users/hackyon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hackyon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hackyon/subscriptions",
"organizations_url": "https://api.github.com/users/hackyon/orgs",
"repos_url": "https://api.github.com/users/hackyon/repos",
"events_url": "https://api.github.com/users/hackyon/events{/privacy}",
"received_events_url": "https://api.github.com/users/hackyon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I think this could be a small but interesting addition, but wanted to get some feedback on whether this is good idea before I write tests and update the documentation. \r\n\r\n@ArthurZucker @amyeroberts @sgugger\r\n\r\nThanks!",
"I'm not on the HF team, but I think this would be great! I was also interested in adding T5Encoder only variants of both SequenceClassification and Extractive QuestionAnswering. ",
"Hey! This is a fairly old model, not entirely sure this is still relevant let's see if the community requests this! ",
"Sounds good. There were some interest in the original issue #14097, and I pinged the thread again to see if anyone else is still interested in it.\r\n\r\nWhen I was playing around with this code, I was also tempted to just implement EncT5 as a feature in T5ForSequenceClassification instead. That would require a new config.use_enct5 and some extra if-statements to gate the differences in logic, but there's a bunch of reusable code too (like the loss computation). The advantage here is that we won't need another class just for EncT5, with less \"model sprawl\". Let me know your thoughts about this other approach.",
"Hi, yes EncT5 was designed to be more composable through configs and should require little code change by reuse existing components especially multi-label and tagging problems.\r\n\r\nYou might find the following results from ACL2023 interesting.\r\n[An Exploration of Encoder-Decoder Approaches to Multi-Label Classification for Legal and Biomedical Text](https://aclanthology.org/2023.findings-acl.360.pdf) showed that EncT5 is quite competitive when it comes to multilabel. See Table 6. \r\nPerhaps their [implementation](https://github.com/coastalcph/Multi-Label-Classification-T5) can be helpful. \r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"I would also vote for adding it into Transformers :hugs: ",
"Thanks @frederick0329 and @stefan-it! \r\n\r\n@ArthurZucker - it seems like there's still a fair bit of interest in this, both on this PR and also in #14097 . The paper Frederick linked, which is based on EncT5, was also published only a couple of months ago.\r\n\r\nI'm motivated to see this through and get the implementation, tests, and docs ready. Should I go ahead and move forward with the implementation? Thank you! 🙏",
"I am not really sure (see my next comment). Also we are going to have a little bit of an issue with the naming. We can't go with EncT5. T5Enc is too close to T5Encoder, so no as well. 😅 \r\n\r\nIf there is only a single layer of decoder what is the difference between defining an `T5ForSequenceClassification` and just setting the number of decoder layers to 1 in the config? \r\n\r\n",
"Thanks @ArthurZucker.\r\n\r\nEncT5 was designed to reuse as much of the original T5 code and with as little change as possible. That's one of the advantages of EncT5.\r\n\r\nWith that said, there are several main differences from the existing T5ForSequenceClassification:\r\n* EncT5 requires a reinitialized decoder embedding - with a smaller vocab size, just only what is necessary to trigger the proper classification. This vocab size changes depending on whether the problem type is regression, single-label classification, or multi-label classification.\r\n* There needs to be a way to reinitialize the decoder embedding, decoder layer (and only the decoder layer), and classification head weights for EncT5. The user can access the internal state of the T5ForSequenceClassification model to do this, but it would be easier for them if we created a private helper method and provide documentation for it.\r\n* The decoder input ids need to be set accordingly (either 0, or range of labels depending on context). Granted this is not difficult, but we can provide the proper default values with a separate EncT5 model.\r\n* The classification head will also change from the existing one for the multi-label classification case (or maybe even the single-label case). Each label in multi-label classification will require their own head, and so it won't make sense to reuse the existing one as there is a [sizable dense layer](https://github.com/huggingface/transformers/blob/51042ae8e5df8275d16b3eaff36c20fb9c191655/src/transformers/models/t5/modeling_t5.py#L781) in it. We will need something more like [this](https://github.com/coastalcph/Multi-Label-Classification-T5/blob/4741edcf4e86fa6023ce125f24dd0bb3a8ebfd2f/models/t5_classifier.py#L82) (which is mostly just a projection without the extra dense layer).\r\n* In multi-label classification, because of the need to provide separate heads per label, we will need to some reshaping of outputs to make things work. For single-label classification and regression, the index of the output to examine is also different from the existing T5ForSequenceClassification. \r\n* There may be additional changes required. I think the main one that comes to mind is the default casual mask which may need to be disabled for the EncT5 case. I still need to look more into this issue though.\r\n\r\nAs I mentioned above [comment](https://github.com/huggingface/transformers/pull/26683#issuecomment-1756943763), I think we can include these changes in the existing T5ForSequenceClassification, but provide a use_enct5 flag in the config and flag the logic when necessary. We'd see some extra if-statements in the code, and the logic will be harder to follow.\r\n\r\nOr perhaps we could use a name like \"EncT5ForSequenceClassification\"? To me, that doesn't conflict with T5Encoder, and also implies that the new class is a variant of T5ForSequenceClassification. ",
"Hey! Thanks for iterating. \r\n> There needs to be a way to reinitialize the decoder embedding, decoder layer (and only the decoder layer), and classification head weights for EncT5. The user can access the internal state of the T5ForSequenceClassification model to do this, but it would be easier for them if we created a private helper method and provide documentation for it.\r\n\r\nIs that something that needs to be done during inference? Or even during runtime? If yes it's a completely different logic from T5 and would need a new model. If not, there's absolutely no reason to just properly convert the checkpoints. \r\n\r\n> The decoder input ids need to be set accordingly (either 0, or range of labels depending on context). Granted this is not difficult, but we can provide the proper default values with a separate EncT5 model.\r\n\r\nAgain this just means that the user has to give something he is already supposed to give no? \r\n\r\n\r\n> There may be additional changes required. I think the main one that comes to mind is the default casual mask which may need to be disabled for the EncT5 case. I still need to look more into this issue though.\r\n\r\nMmm still seem like the user can just pass the correct attention mask no? ",
"Thanks for checking in again! Let me know if you have any other questions. I'm happy to do a 1/2-pager write-up design if you think it'd help.\r\n\r\n> Hey! Thanks for iterating.\r\n> \r\n> > There needs to be a way to reinitialize the decoder embedding, decoder layer (and only the decoder layer), and classification head weights for EncT5. The user can access the internal state of the T5ForSequenceClassification model to do this, but it would be easier for them if we created a private helper method and provide documentation for it.\r\n> \r\n> Is that something that needs to be done during inference? Or even during runtime? If yes it's a completely different logic from T5 and would need a new model. If not, there's absolutely no reason to just properly convert the checkpoints.\r\n\r\nThe reinitialization happens after loading a pre-trained model and before fine-tuning. The weights don't need to be changed or reinitialized during inference or runtime.\r\n\r\nI think we could convert some checkpoints that already have the weights re-initialized. However, would the converted checkpoints support different config values (num_labels, problem_types, and projection sizes) that may be supported by EncT5? It's my understanding that the weight matrices would have different shapes depending on these config values, so I'm not sure converted checkpoints would work for all the cases (although I don't know enough about converted checkpoints to know if this is possible).\r\n\r\nFundamentally, the \"purely model\" differences between T5 and EncT5 is in the classification head and the decoder embedding (which isn't shared with the encoder embedding). The other differences can be worked around with the proper config and inputs, but the user will likely need to dig into the internals of the implementation to figure out how to supply the proper config and inputs. \r\n\r\n> \r\n> > The decoder input ids need to be set accordingly (either 0, or range of labels depending on context). Granted this is not difficult, but we can provide the proper default values with a separate EncT5 model.\r\n> \r\n> Again this just means that the user has to give something he is already supposed to give no?\r\n\r\nYup, this is purely for convenience. With that said, the decoder_input_ids that are given to EncT5 are fundamentally different from those given to T5, so it should be easier for the user.\r\n\r\n> \r\n> > There may be additional changes required. I think the main one that comes to mind is the default casual mask which may need to be disabled for the EncT5 case. I still need to look more into this issue though.\r\n> \r\n> Mmm still seem like the user can just pass the correct attention mask no?\r\n\r\nI took another look and it seems like the [default causal mask logic](https://github.com/huggingface/transformers/blob/c030fc891395d11249046e36b9e0219685b33399/src/transformers/modeling_utils.py#L917-L922) would not be triggered if the user provided [a 3 dimensional attention mask](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py#L913-L914). It is possible, but not easy for the user to know about this. We could probably use this mechanism ourselves for EncT5 to supply the attention mask and override the default causal masking logic.\r\n\r\n\r\n",
"Hey, I don't think we are aligned with what a new model implies. \r\nA new model in transformers means that the model is different (no in terms of weights, vocab size or decoder / encoder embeddings), not in terms of which inputs are given to the model, but in terms of architecture. So if there is indeed a difference, then we might need to add some code.\r\n\r\n> Fundamentally, the \"purely model\" differences between T5 and EncT5 is in the classification head and the decoder embedding (which isn't shared with the encoder embedding). \r\n\r\n- the shared embedding can be controlled by loading the proper weights in the proper layers. \r\n- the classification head should already be controlled with `config.num_labels)`. I might have missed the other changes?\r\n\r\nIf the model you are adding needs a new conversion script, you could also just link the script in the ressources for example or add the script here. \r\n\r\nLast but not least, let me iterate again, `EncT5` can never fit in the T5 repo, it's just not consistent with the entire library! \r\n🤗 ",
"Model is an overloaded term, and so I can see where the confusion happens! Sorry if I haven't been clear in my explanation of EncT5. Perhaps we can think of EncT5 as a feature/variant of the existing T5ForSequenceClassification, rather than a completely new \"model\".\r\n\r\nWhen I said the two \"purely model\" differences, what I meant were the architectural differences that require code change. I don't think there is a way to get around this without code change (but please correct me if I'm wrong):\r\n\r\n- Shared Word Embedding (specifically, the word embedding that converts input_ids to input_embeds): T5 currently uses a [shared embedding](https://github.com/huggingface/transformers/blob/32f799db0d625ec5cf82624ff2604c5a891ebf61/src/transformers/models/t5/modeling_t5.py#L1365-L1378) between the encoder and decoder. They are the same instance in the code. For EncT5, we will need two different embeddings. This means we will need to instantiate a complete new embedding, and set it for the decoder. \r\n\r\n- Classification Head: There are a couple of (smaller) differences here. For regression and single-label classification, the classification head logic should be similar to the existing T5ForSequenceClassification, but we need to change [this part](https://github.com/huggingface/transformers/blob/32f799db0d625ec5cf82624ff2604c5a891ebf61/src/transformers/models/t5/modeling_t5.py#L2070-L2078) since the eos_mask it computes does not work for EncT5. For multi-label classification, EncT5 uses a separate classification head per label, instead of a single classification head that projects out to multiple labels. \r\n\r\nI am definitely in favor of keeping the T5 repo clean, and am just hoping iterate with you on a proper design that would work! Let's drop the EncT5 naming and extra class if it doesn't fit inside the T5 repo. \r\n\r\nSince the architectural changes are minimal, perhaps we can use my earlier [proposal](https://github.com/huggingface/transformers/pull/26683#issuecomment-1756943763) of adding these architectural changes into the existing T5ForSequenceClassification, but gating it on a flag? I can make this change so it would be easier for you to visualize the necessary architectural changes.\r\n",
"Nice the last comment is perfect thanks! \r\nYou can:\r\n- add a new Class called T5ForXXXX with an appropriate name at the end of the file\r\n- add the necessary changes (nothing should be changed in previous T5code)\r\n- make sure you have tests (integrations tests) and documentation regarding what this new class adds and allows people to do (basically translating the motivations behind adding this). \r\n\r\nThe T5ForSequenceClassifcation should not be changed 😉 ",
"Sounds good, thanks @ArthurZucker! 🙏\r\n\r\nI'll iterate and write the necessary tests and documentation for it over the coming week.",
"FYI: a medical issue came up and I'll need a couple more days, but I'm still working on it.",
"@ArthurZucker \r\n\r\nAs I was working through the tests, I ended up reading up more on some other variants of the T5 library, such umT5, mT5, ByT5, and LongT5. It seems like many of these share a lot of code T5, and are just forked from the T5 directory with a lot of \"fix-copies\" dependencies. \r\n\r\nThinking about it more, how would you feel if we created a new directory named encT5, with a fork of the basic T5 directory code and \"fix-copies\" dependencies? \r\n\r\nThe relationship between T5 and mT5 (with mT5 being a variant of T5) is similar to the relationship between T5 and encT5 (with encT5 being a variant of T5). This feels more consistent than adding a T5ForEncoderBasedSequenceClassification class to the T5 repo.\r\n\r\nThanks for your input!",
"> @ArthurZucker\r\n> \r\n> As I was working through the tests, I ended up reading up more on some other variants of the T5 library, such umT5, mT5, ByT5, and LongT5. It seems like many of these share a lot of code T5, and are just forked from the T5 directory with a lot of \"fix-copies\" dependencies.\r\n> \r\n> Thinking about it more, how would you feel if we created a new directory named encT5, with a fork of the basic T5 directory code and \"fix-copies\" dependencies?\r\n> \r\n> The relationship between T5 and mT5 (with mT5 being a variant of T5) is similar to the relationship between T5 and encT5 (with encT5 being a variant of T5). This feels more consistent than adding a T5ForEncoderBasedSequenceClassification class to the T5 repo.\r\n> \r\n> Thanks for your input!\r\n\r\nHere is a draft PR for this particular idea:\r\nhttps://github.com/huggingface/transformers/pull/27472/files\r\n\r\nThere seems to be more code, but in reality, it's more-or-less the same as this one, but with fix-copies on the T5Models. This looks cleaner IMHO, especially given the new configurations, and also isolates the changes to a new file rather than updating the existing T5 models.\r\n\r\nLet me know your thoughts, thanks! 🙏",
"Thanks for looking over the change!\r\n\r\nThe classification head class is indeed one of 2 architectural changes, the other being the decoder embedding (with a different vocab_size for the decoder). Overall, the changes indeed look similar to ForTokenClassification, but the use case is quite different and we are still doing sequence classification. Furthermore, for our model, the decoder_input_ids are constants, rather than shifted input_ids.\r\n\r\nOne big takeaway is that it could be worth creating a new configuration class for our model, so we can properly set the num_decoder_layer and decoder_vocab_size. I think this new approach of creating a new model directory, and using fix-copies on the existing t5, can help achieve this: https://github.com/huggingface/transformers/pull/27472/files. Do you mind taking a look at that change and see if it makes sense to you?\r\n\r\nThanks! 🙏",
"Hey! \r\nNo I think the best path here is to add `T5ForTokenClassification` and create a Config (an instance not a class) with something like:\r\n```python \r\n>>> from transformers import T5Config\r\n>>> config = T5Config(\r\n num_decoder_layers = 1\r\n)\r\n```\r\nand pushing this to the hub. \r\nThen the checkpoints should be converted and loaded into a `T5ForTokenClassification` which yes does not have the appropriate name but can be used for both tasks so that's a plus! \r\nIf the decoder_input_ids are constant, they ca be passed to the model (and saved) through the GenrationConfig see [here](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig.decoder_start_token_id)\r\n",
"Thanks for the input. \r\n\r\nI don't think ForTokenClassification will work in this case. Aside from having an entirely different name and use case as intended, the loss calculation is also completely different. \r\n\r\nAs for the config, the existing T5Config does not have a decoder_vocab_size variable, so we won't be able to control that configuration. We can manually set it to 1 or num_labels depending on the problem_type (this would be the default value of the decoder_vocab_size anyways), so this is a viable solution if need be.",
"The loss computation can be done outside of the model and is usally what we recommend, the labels can just not be passed for each custom usage. \r\n\r\nThe missing attributes can be dealt by just saving them in the config (which is a general class that can save and load other attributes see:\r\n```python \r\nfrom transformers import T5Config\r\nconfig = T5Config(my_special_param = \"my_param\")\r\nconfig.save_pretrained(\"/tmp/tests\")\r\n T5Config.from_pretrained(\"/tmp/tests\")\r\n```\r\n",
"Hey @hackyon tldr is I think ForTokenClassification will be the best way to solve the issue 🤗 do you still feel like adding it? ",
"Yea sure, I can go ahead and add the ForTokenClassification variant. It looks simple enough and might be useful. I'll start another PR for that.",
"Feel free to mention in the documenbtation of the t5.md that this class should be use for EncT5 tasks for example ",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This one's closed, but just added a final write-up on the issue:\r\n\r\n* EncT5 doesn't really fit into the T5 official HuggingFace repo, as it's inconsistent with the entire library.\r\n\r\n* Instead, we added T5ForTokenClassification in #28443. T5ForTokenClassification is encoder only, and has a classification head and it should be possible to repurpose it for encoder-only sequence classification tasks as well.\r\n\r\n* I ended up cleaning up and pushing the code in this PR onto the HuggingFace model hub at https://huggingface.co/hackyon/enct5-base (needs fine tuning). It should be easily accessible through `model = AutoModelForSequenceClassification.from_pretrained(\"hackyon/enct5-base\", trust_remote_code=True)`. The code is also available on [github](https://github.com/hackyon/EncT5). Feel free to play around with it, and let me know what you think.\r\n\r\nThanks for everyone's help and support on this issue!"
] | 1,696 | 1,708 | 1,707 | CONTRIBUTOR | null | Adding an EncT5 model/framework for non-autoregressive tasks, as described in https://arxiv.org/abs/2110.08426 (Algorithm 1 for now, but I plan to follow up with support for Algorithm 2). There is already a T5ForSequenceClassification variant, but EncT5 uses fewer params (just a single decoder layer instead) and, according to the paper, achieves similar results to BERT and T5 on the benchmarks.
Context in #14097
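For readers landing here from the discussion above, a minimal sketch of the single-decoder-layer setup suggested in the review comments (the checkpoint, `num_labels`, and the idea of loading `t5-base` into a 1-layer-decoder config are illustrative, not part of this PR):

```python
# Reuse the existing T5ForSequenceClassification with a single decoder layer,
# as suggested in the review thread, instead of a separate EncT5 class.
from transformers import T5Config, T5ForSequenceClassification

config = T5Config.from_pretrained("t5-base", num_decoder_layers=1, num_labels=3)
# Extra decoder layers present in the checkpoint are simply skipped (with a warning),
# and the classification head is newly initialized for fine-tuning.
model = T5ForSequenceClassification.from_pretrained("t5-base", config=config)
```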
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker @amyeroberts @sgugger @sjrl
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26683/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26683/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26683",
"html_url": "https://github.com/huggingface/transformers/pull/26683",
"diff_url": "https://github.com/huggingface/transformers/pull/26683.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26683.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26682 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26682/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26682/comments | https://api.github.com/repos/huggingface/transformers/issues/26682/events | https://github.com/huggingface/transformers/pull/26682 | 1,932,540,728 | PR_kwDOCUB6oc5cO0ra | 26,682 | Fixed KeyError for Mistral | {
"login": "MatteoRaso",
"id": 33975162,
"node_id": "MDQ6VXNlcjMzOTc1MTYy",
"avatar_url": "https://avatars.githubusercontent.com/u/33975162?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MatteoRaso",
"html_url": "https://github.com/MatteoRaso",
"followers_url": "https://api.github.com/users/MatteoRaso/followers",
"following_url": "https://api.github.com/users/MatteoRaso/following{/other_user}",
"gists_url": "https://api.github.com/users/MatteoRaso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MatteoRaso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MatteoRaso/subscriptions",
"organizations_url": "https://api.github.com/users/MatteoRaso/orgs",
"repos_url": "https://api.github.com/users/MatteoRaso/repos",
"events_url": "https://api.github.com/users/MatteoRaso/events{/privacy}",
"received_events_url": "https://api.github.com/users/MatteoRaso/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Ok, I changed it. The reason why I put a try catch was in case there were mistral checkpoints that actually did use \"ragged_attention\". Is it possible that the torrent release and the HF release are slightly different? That might explain how this happened.",
"I am not really sure I try downloading from the link they share in their repo: https://github.com/mistralai/mistral-src#download-the-model. Would you mind checking to make sure the publicly shared checkpoints all use sliding window? 🤗 ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26682). All of your documentation changes will be reflected on that endpoint.",
"I just downloaded the torrent checkpoint, and it also uses \"sliding_window\". Between that, HF, and Github, I'm pretty sure that's all the releases.\n\n------- Original Message -------\nOn Wednesday, October 11th, 2023 at 2:47 AM, Arthur ***@***.***> wrote:\n\n> I am not really sure I try downloading from the link they share in their repo: https://github.com/mistralai/mistral-src#download-the-model. Would you mind checking to make sure the publicly shared checkpoints all use sliding window? 🤗\n>\n> —\n> Reply to this email directly, [view it on GitHub](https://github.com/huggingface/transformers/pull/26682#issuecomment-1756947549), or [unsubscribe](https://github.com/notifications/unsubscribe-auth/AIDGW6VWHLFLKB4CDNDJVZDX6Y6HLAVCNFSM6AAAAAA5YNENQWVHI2DSMVQWIX3LMV43OSLTON2WKQ3PNVWWK3TUHMYTONJWHE2DONJUHE).\n> You are receiving this because you authored the thread.Message ID: ***@***.***>",
"Cool let's merge then! "
] | 1,696 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a KeyError for Mistral: `convert_mistral_weights_to_hf.py` checks for the key "ragged_attention", but the HF release of Mistral uses "sliding_window" instead.
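For reference, a minimal sketch of the lookup this PR touches (the path handling around it is simplified and the folder name is illustrative; only the key name matters here):

```python
# The released params.json stores the window size under "sliding_window", so the
# converter should read that key; looking up "ragged_attention" raises the KeyError.
import json
import os

input_base_path = "mistral-7B-v0.1"  # illustrative path to the downloaded release
with open(os.path.join(input_base_path, "params.json")) as f:
    params = json.load(f)

sliding_window = params["sliding_window"]  # was: params["ragged_attention"] -> KeyError
```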
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26682/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26682",
"html_url": "https://github.com/huggingface/transformers/pull/26682",
"diff_url": "https://github.com/huggingface/transformers/pull/26682.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26682.patch",
"merged_at": 1697210427000
} |
https://api.github.com/repos/huggingface/transformers/issues/26681 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26681/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26681/comments | https://api.github.com/repos/huggingface/transformers/issues/26681/events | https://github.com/huggingface/transformers/pull/26681 | 1,932,452,330 | PR_kwDOCUB6oc5cOhbV | 26,681 | Generate: New `Cache` abstraction and Attention Sinks support | {
"login": "tomaarsen",
"id": 37621491,
"node_id": "MDQ6VXNlcjM3NjIxNDkx",
"avatar_url": "https://avatars.githubusercontent.com/u/37621491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomaarsen",
"html_url": "https://github.com/tomaarsen",
"followers_url": "https://api.github.com/users/tomaarsen/followers",
"following_url": "https://api.github.com/users/tomaarsen/following{/other_user}",
"gists_url": "https://api.github.com/users/tomaarsen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tomaarsen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tomaarsen/subscriptions",
"organizations_url": "https://api.github.com/users/tomaarsen/orgs",
"repos_url": "https://api.github.com/users/tomaarsen/repos",
"events_url": "https://api.github.com/users/tomaarsen/events{/privacy}",
"received_events_url": "https://api.github.com/users/tomaarsen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@patrickvonplaten @gante \r\nBased on your feedback, I intend to make the following changes (when I have a bit more time). \r\n\r\n1. move `layer_idx` to `update(key_states, value_states, layer_idx)` rather than storing it as a class attribute on `Cache`. This also involves adding `layer_idx` as class attributes to e.g. `LlamaAttention` and `LlamaDecoderLayer`. This also removes the `set_layer_index` magic from `Cache`.\r\n2. convert `past_key_values` to Cache instance (if not already) at the start of LlamaAttention.forward. This avoids all `isinstance(...)` calls in `LlamaAttention`, and removes the need for the black magicky `__bool__`.\r\n3. convert back to tuple of tuples when returning if some `use_legacy_cache` flag is True. (should this flag be propagated all the way up to `LlamaModel`?)\r\n4. use separate `key_cache` and `value_cache` dicts in the `Cache` instance for efficiency (removes a `torch.cat`) and simplicity.\r\n\r\nRegarding https://github.com/huggingface/transformers/pull/26681#discussion_r1350368080 I would have to experiment. There might be options, but they'll probably be slower than necessary.",
"`past_key_values` and caching will be an incredible feature.\r\n\r\nTwo PRs should be made irrelevant if this merges: https://github.com/huggingface/transformers/pull/25086 https://github.com/huggingface/transformers/pull/17574",
"> @patrickvonplaten @gante Based on your feedback, I intend to make the following changes (when I have a bit more time).\r\n> \r\n> 1. move `layer_idx` to `update(key_states, value_states, layer_idx)` rather than storing it as a class attribute on `Cache`. This also involves adding `layer_idx` as class attributes to e.g. `LlamaAttention` and `LlamaDecoderLayer`. This also removes the `set_layer_index` magic from `Cache`.\r\n> 2. convert `past_key_values` to Cache instance (if not already) at the start of LlamaAttention.forward. This avoids all `isinstance(...)` calls in `LlamaAttention`, and removes the need for the black magicky `__bool__`.\r\n> 3. convert back to tuple of tuples when returning if some `use_legacy_cache` flag is True. (should this flag be propagated all the way up to `LlamaModel`?)\r\n> 4. use separate `key_cache` and `value_cache` dicts in the `Cache` instance for efficiency (removes a `torch.cat`) and simplicity.\r\n> \r\n> Regarding [#26681 (comment)](https://github.com/huggingface/transformers/pull/26681#discussion_r1350368080) I would have to experiment. There might be options, but they'll probably be slower than necessary.\r\n\r\nSounds great!",
"Addressed the various comments. Beyond that, I also made `past_key_values.update(key, value, idx)` returns `key, value` as you'd expect. Manual generation (i.e. repeated calling of a `LlamaForCausalLM` instance with the `past_key_values`) works well, I even see a ~2% speedup, but don't quote me on that speedup. `model.generate` doesn't work yet because `use_legacy_flag` defaults to None, i.e. Falsey, and `model.generate` doesn't work with the cache yet. @patrickvonplaten @gante Should we go for:\r\n1. Immediately update `model.generate` to work with Cache instances or,\r\n2. Insert `use_legacy_cache=True` as default for `model.generate` until this can be removed in some later PR?\r\n\r\nThis is all under the assumption that the PR is heading in the right direction 😄 \r\n\r\nAs a heads up, the sink cache does not work yet. I still need to do some experiments to see if I can store rotated keys and back-rotate + forward-rotate them when the cached keys are requested, rather than storing non-rotated keys. That is what I'll work on next. \r\n\r\nThat leaves me with an additional question: each architecture requires slightly different key rotations. I'd like to implement this to be sufficiently adaptable, e.g. allowing architecture-specific functionality in `src/transformers/models/<architecture>/cache_utils.py`. However, what the relation to classes or functions in this file should be to the `src/transformers/cache_utils.py` cache classes is still unclear to me. In short: how do we allow an architecture to e.g. have slightly different implementations for the `SinkCache` or the `DynamicCache`?\r\n\r\n- Tom Aarsen",
"Hey @tomaarsen 👋 \r\n\r\n### `generate` compatibility\r\nRe `generate` compatibility: usually, I'm pro small PRs, tackling one problem at a time. However, since `generate` is the main gate to LLMs in `transformers`, and caching is only needed for auto-regressive generation, day 0 support is important -- at least `gready_search` and `sample` should be operational by the time this PR can be merged. Otherwise, we might get ourselves in a position where we realize we need to rewrite a significant part of `Cache`/`generate` to enable them together, which is very undesirable. However, I'm pro having this PR enabling the new cache on a single model (Llama) and taking care of the other models later as needed 🤗 I can also give a hand on the `generate` side, when we are happy with the state of the PR with the exception of `generate`.\r\n\r\n### model-specific cache changes\r\nI do expect some models to require different code to convert back and forth from the legacy cache format (not all models have the same cache format, there are ~5% of them have slight differences e.g. [see this in BLOOM](https://github.com/huggingface/transformers/blob/288bf5c1d2844e89a2a87aeee90033532335e2e6/src/transformers/models/bloom/modeling_bloom.py#L504)). You are also writing that RoPE-based models may also need custom logic. \r\n\r\nThe model-specific cache modification part is an important design decision that will have an impact over many versions of `transformers` to come, so I'd also like to hear your thoughts @tomaarsen @patrickvonplaten \r\n\r\nEDIT: what follows bellow is my \"plan B\" suggestion, read my next comment for the \"plan A\" :D\r\nTo me, that suggests five things:\r\n1) a model may need to subclass the base `Cache` to implement its own nuances regarding the cache format\r\n2) despite the above, the cache operations and output format are standardized for each type of cache, so we would benefit from a strong base class (as we do in the config files or in the base pretrained class).\r\n3) I suspect the model-specific changes for the cache would be small, so we could keep them in the modeling file. We can always move to a new file if needed :)\r\n4) Because of 1), each model would have to implement a thin wrapper class to the instantiable cache classes, even if the base class is fully compatible. This also goes in line with our philosophy in `tramsformers` where each model implements its own model-specific operations.\r\n5) Because of 4), in practice, each model defines its own cache. This means users are free to write their own custom caches -- power to the users 💪 \r\n\r\n",
"Upon some shower thoughts, I've come across an alternative plan for the model-specific cache modification problem -- easier to implement and that would result in more readable code. \r\n\r\nInstead of the base `Cache` holding the code to convert to and from the legacy format (which then requires subclassing at each model, if it has a different legacy cache format), the conversion logic could be held in model-specific functions to convert to and from the legacy format. In other words, each model would implement a `to_legacy_cache` and a `from_legacy_cache`, and the different types of `Cache` would be immediately available to the model with no further modifications!",
"> lding the code to convert to and from the legacy format (which then requires subclassing at each model, if it has a different legacy cache format)\r\n\r\nAgree here! Think it would be better if we only have a few selected cache classes that work for all models. The functions `from_legacy_cache` and `to_legacy_cache` indeed need to be model specific, so I think they can be just stand-alone functions in each `modeling_....py` file. \r\n\r\n=> Thus, I think we can:\r\n- a) Have all `Cache` classes in a general `generation/cache.py` file. These general cache implementations should be identical for all models\r\n- b) For backwards compatibility each model needs specific `from_legacy_cache` and `to_legacy_cache` functions, but they don't need to be part of the Cache class IMO, they can just be stand-alone functions in each model class so that they are easier to deprecate going forward.",
"I just noticed that Joao's message has been edited, and I missed Patrick's message, so my response doesn't make much sense anymore - I deleted it.\r\n\r\nI also prefer the `from_legacy_cache` and `to_legacy_cache` implementations. I'll get on it.",
"@gante @tomaarsen This is really a good abstraction of kv_cache to enable window context and be compatible to the legacy kv_cache. But for the memory footprint, the 'torch.cat' are still needed when update the cache and reorder_cache using 'index_select' is also there with beam search. \r\nDo you have a plan to use the pre-allocate buffer to avoid 'torch.cat'? The pre-allocated buffer should be compatible with semantic of your Cache. For the cache.update operation, token slots from the pre-allocated buffer are needed to store key/value token states. ",
"I removed the commits regarding changing the order of the caches based on @gante's recommendation. He is working on more changes for this PR here: https://github.com/tomaarsen/transformers/pull/1",
"@liangan1 both are issues in the near-future roadmap, with this cache abstraction being a requirement! 🙌 \r\n\r\nFirst we will work on pre-allocated buffers (a subclass of `Cache`), which result in significant speedups (especially with `torch.compile`). Then we will simplify beam search into an XLA-friendly method, just like in our TF and JAX implementations.",
"> @liangan1 both are issues in the near-future roadmap, with this cache abstraction being a requirement! 🙌\r\n> \r\n> First we will work on pre-allocated buffers (a subclass of `Cache`), which result in significant speedups (especially with `torch.compile`). Then we will simplify beam search into an XLA-friendly method, just like in our TF and JAX implementations.\r\n\r\nWow, cool. Can you share more details about your roadmap? e.g., the pre-allocate buffers. In fact, we also have implemented a indirect access kv_cache in [Intel-extension-for-pytorch optimization for LLM](https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/llm.html#indirect-access-kv-cache) which can be used for both greedy and beam and we are pleasure to contribute code if I can know your design about the pre-allocated buffer.",
"> Should we roll out the new cache feature for the most important models and aim to get this PR in soon ?\r\n\r\nYes! @gante has picked up the work in https://github.com/tomaarsen/transformers/pull/1 to get `generate()` working, and has proposed to implement the cache for Llama, Mistral and Persimmon + some tests. Once he has completed that work, I can merge it into this PR, and we can get it merged into `main`.\r\n\r\nDoes that sound like a plan?",
"This PR is now generate-compatible, and has tests to ensure the two formats are interchangeable 💪 \r\n\r\nThe suggested review order is `cache_utils.py` -> `modeling_llama.py` -> `test_utils.py` -> others\r\n\r\n@patrickvonplaten in the added test, we confirm that the two formats are interchangeable: they can be converted back and forth and be manipulated the same way. However, under the hood, the updated models use the new cache internally. It would be best to add a long slow test to key models (Llama and Mistral here) before merging this PR, correct?\r\n\r\n@tomaarsen can you confirm that the attention sinks are working as expected?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26681). All of your documentation changes will be reflected on that endpoint.",
"@patrickvonplaten applied the suggested changes 👍 I agree they the full abstraction will be better in the long run.\r\n\r\nTwo minor modifications over your suggestions:\r\n1. Replaced the `__init__` `warn` by `warning_once`, otherwise the user would get `n_layers` repeated warnings\r\n2. In the beam methods, to check against `Cache`, we needed to perform the check before calling `_reorder_cache`",
"Will review tomorrow ! ",
"@ArthurZucker depending on your answer to [this comment](https://github.com/huggingface/transformers/pull/26681#discussion_r1411079517), the PR will move very differently -- I'll wait for your answer before continuing :)",
"Ready for a new round of reviews. IMO, it's now much better than before: no new flags, `generate` compatibility for any `Cache` subclass, among others.\r\n\r\n@patrickvonplaten @ArthurZucker:\r\n- There is no more `use_legacy_cache` flag. If `past_key_values` is a `Cache` instance (which can be an empty cache), the model sticks to that format. No flags to deprecate in the future :)\r\n- Because of the point above, if we pass `past_key_values=some_cache_instance` to `generate`, it will use that cache instance (and, therefore, format). `generate` is ready to receive static caches! See the generation test for an example.\r\n- The caches now accept a `cache_kwargs` dictionary in `update` (its is not a `**dict`). It is the responsibility of each subclass to process this dictionary. It should gives us plenty of room (and flexibility) to grow without exploding the signature.\r\n- `Cache` has a `reorder_cache` method, as suggested. This also means we can start thinking about standardizing cache formats and removing per-model functions;\r\n- Added tons of docstrings\r\n\r\n@tomaarsen:\r\n- Added support for partial rotation in `SinkCache`. An extra set of eyes there would be appreciated! :)\r\n\r\n~⚠️ There is a bug with beam search + legacy cache format. I suspect it is a subtle dimension-related bug, but the PR can be reviewed while I chase it.~ EDIT: bug sorted",
"I've ran perplexity tests for Attention Sinks with Persimmon, Llama and Mistral, and they all look good in terms of performance, VRAM and latency. I plan to do some more tests with Phi tomorrow, but I assume it's all good other than the above 3 comments.\r\nI'll also try to use SinkCache with `generate`.",
"Let's get this one in for the release no? cc @ArthurZucker @amyeroberts @LysandreJik ",
"Yep \r\n",
"I am going to trigger the CI now (with a reabse on main in order to include the fix #27887",
"https://github.com/huggingface/transformers/actions/runs/7129508785"
] | 1,696 | 1,708 | 1,702 | MEMBER | null | Closes #26553
Hello!
# What does this PR do?
I had a few hours on Saturday to work up a draft version of the updated KV caching mechanism as discussed in #26553. Ideally, this should allow Attention Sinks (https://github.com/tomaarsen/attention_sinks) / StreamingLLM (https://arxiv.org/abs/2309.17453) to be easily implemented in a third-party package or in transformers directly.
The implementation doesn't work well yet, as the VRAM usage quickly shoots up after generating even just 8 tokens. This is probably a bug that I haven't had time to track down yet. There are a few other comments that I have on specific sections of code, so I'll write them below.
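To make the intended usage concrete, here is a minimal sketch. Class and method names follow the discussion in this thread (`DynamicCache`/`SinkCache` in `cache_utils.py`); the checkpoint and window sizes are illustrative and the merged API may differ in details:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.cache_utils import DynamicCache, SinkCache

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
inputs = tokenizer("Attention sinks keep the first tokens around:", return_tensors="pt")

# Drop-in replacement for the legacy tuple-of-tuples past_key_values
past_key_values = DynamicCache()
out = model(**inputs, past_key_values=past_key_values, use_cache=True)

# Sliding-window cache with attention sinks (StreamingLLM)
sink_cache = SinkCache(window_length=256, num_sink_tokens=4)
generated = model.generate(**inputs, past_key_values=sink_cache, max_new_tokens=32)
```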
## Goal for this draft
The intention for this draft is to continue discussion about whether this is moving in the right direction, and to determine the scope (e.g. do we want to include this updated `Cache` for all architectures that use KV caching?).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
@gante
@LysandreJik
@Guangxuan-Xiao
- Tom Aarsen
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26681/reactions",
"total_count": 10,
"+1": 7,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26681/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26681",
"html_url": "https://github.com/huggingface/transformers/pull/26681",
"diff_url": "https://github.com/huggingface/transformers/pull/26681.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26681.patch",
"merged_at": 1702022418000
} |
https://api.github.com/repos/huggingface/transformers/issues/26680 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26680/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26680/comments | https://api.github.com/repos/huggingface/transformers/issues/26680/events | https://github.com/huggingface/transformers/issues/26680 | 1,932,338,701 | I_kwDOCUB6oc5zLSYN | 26,680 | BLOOM past_key_values issue | {
"login": "Kowsher",
"id": 16461536,
"node_id": "MDQ6VXNlcjE2NDYxNTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/16461536?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Kowsher",
"html_url": "https://github.com/Kowsher",
"followers_url": "https://api.github.com/users/Kowsher/followers",
"following_url": "https://api.github.com/users/Kowsher/following{/other_user}",
"gists_url": "https://api.github.com/users/Kowsher/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Kowsher/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kowsher/subscriptions",
"organizations_url": "https://api.github.com/users/Kowsher/orgs",
"repos_url": "https://api.github.com/users/Kowsher/repos",
"events_url": "https://api.github.com/users/Kowsher/events{/privacy}",
"received_events_url": "https://api.github.com/users/Kowsher/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,696 | 1,697 | 1,697 | NONE | null | ### System Info
I used a custom plast_key_values by following this description
https://huggingface.co/docs/transformers/model_doc/bloom#transformers.BloomModel.forward.past_key_values
for example for batch size 2 and seq length 3
i get plast_key_values[0][0] shape as orch.Size([32, 128, 3]) and plast_key_values[0][1] as torch.Size([32, 3, 128])
But getting this error
RuntimeError: The expanded size of the tensor (4) must match the existing size (8) at non-singleton dimension 2. Target sizes: [32, 1, 4]. Tensor sizes: [32, 1, 8]
I'm using transformers==4.34.0 in google colab
### Who can help?
_No response_
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
N/A
### Expected behavior
N/A | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26680/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26679 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26679/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26679/comments | https://api.github.com/repos/huggingface/transformers/issues/26679/events | https://github.com/huggingface/transformers/pull/26679 | 1,932,254,097 | PR_kwDOCUB6oc5cN13f | 26,679 | SwinModel docstring fix | {
"login": "shivanandmn",
"id": 21271698,
"node_id": "MDQ6VXNlcjIxMjcxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/21271698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shivanandmn",
"html_url": "https://github.com/shivanandmn",
"followers_url": "https://api.github.com/users/shivanandmn/followers",
"following_url": "https://api.github.com/users/shivanandmn/following{/other_user}",
"gists_url": "https://api.github.com/users/shivanandmn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shivanandmn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shivanandmn/subscriptions",
"organizations_url": "https://api.github.com/users/shivanandmn/orgs",
"repos_url": "https://api.github.com/users/shivanandmn/repos",
"events_url": "https://api.github.com/users/shivanandmn/events{/privacy}",
"received_events_url": "https://api.github.com/users/shivanandmn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh resolved merge conflicts - ready for ✅ Thanks!",
"@ydshieh \r\nI have updated changes. This is my first open-source contribution in the life.\r\nI happy to learn from community.\r\nThanks ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26679). All of your documentation changes will be reflected on that endpoint."
] | 1,696 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Docstring for `SwinModel` is added.
Fixes # (issue)
## Before submitting
- [x ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. https://github.com/huggingface/transformers/issues/26638
- [ x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ydshieh
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26679/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26679/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26679",
"html_url": "https://github.com/huggingface/transformers/pull/26679",
"diff_url": "https://github.com/huggingface/transformers/pull/26679.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26679.patch",
"merged_at": 1697032412000
} |
https://api.github.com/repos/huggingface/transformers/issues/26720 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26720/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26720/comments | https://api.github.com/repos/huggingface/transformers/issues/26720/events | https://github.com/huggingface/transformers/issues/26720 | 1,935,284,684 | I_kwDOCUB6oc5zWhnM | 26,720 | Using with vLLM and runpod gives error stating Repo id must be in the form 'repo_name'... | {
"login": "rafa-9",
"id": 92696534,
"node_id": "U_kgDOBYZv1g",
"avatar_url": "https://avatars.githubusercontent.com/u/92696534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafa-9",
"html_url": "https://github.com/rafa-9",
"followers_url": "https://api.github.com/users/rafa-9/followers",
"following_url": "https://api.github.com/users/rafa-9/following{/other_user}",
"gists_url": "https://api.github.com/users/rafa-9/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafa-9/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafa-9/subscriptions",
"organizations_url": "https://api.github.com/users/rafa-9/orgs",
"repos_url": "https://api.github.com/users/rafa-9/repos",
"events_url": "https://api.github.com/users/rafa-9/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafa-9/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3817266200,
"node_id": "MDU6TGFiZWwzODE3MjY2MjAw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": null
}
] | closed | false | null | [] | [
"Hi @rafa-9, [here is the code](https://github.com/huggingface/transformers/blob/a9862a0f495bac3a6ecd5165686610fd5d91c848/src/transformers/configuration_utils.py#L664) that checks whether the provided value is a local path, a url or a repo_id and load the config file correspondingly. If you get this error, I suspect that the local path does not exist or is not a folder.\r\n\r\nI am transferring this issue to the `transformers` repository as it is not really `huggingface_hub` related. Hfh is the underlying library that download files from the Hub. The validation error you are getting is raised because it is not a model_id (and is not meant to be compatible with paths). @ArthurZucker @amyeroberts I think it would be a good improvement if `transformers` can catch the `HFValidationError` when calling huggingface_hub with a `model_path_or_name` value. I remember that it's not the first time it happens and I think a clearer (custom) message would be better. Something like `Incorrect local_path_or_model_id: '/runpod-volume/Mistralic-7B-1-AWQ'. Please provide either the path to a local folder or the repo_id of a model on the Hub.`. WDYT?",
"Agreed 😉 will open a PR"
] | 1,696 | 1,700 | 1,700 | NONE | null | ### Describe the bug
I have tried [multiple templates](https://github.com/runpod-workers/worker-vllm) that use vLLM and deployed them to Runpod, but the deployment is blocked by this bug where the repo_id is validated.
For faster deployments, Runpod suggests downloading the model to external storage, which is usually mounted at `/runpod-volume/`. Thus any downloaded model is stored at `/runpod-volume/namespace/repo_name`.
The [checks in the validator](https://github.com/huggingface/huggingface_hub/blob/5d2d297084b230d7725b9ccafe6eb9a7f7c9a40e/src/huggingface_hub/utils/_validators.py#L123) prevent the models from loading.
How can we bypass this issue? Thanks!
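For context, a rough sketch (not the actual implementation) of how `from_pretrained` decides between a local folder and a Hub repo_id; as noted in the comment above, the `HFValidationError` only shows up when the mounted path is not an existing directory:

```python
import os
from huggingface_hub import hf_hub_download

def resolve_config_file(pretrained_model_name_or_path: str) -> str:
    # Simplified version of the dispatch in transformers/configuration_utils.py
    if os.path.isdir(pretrained_model_name_or_path):
        # Existing local folder: read config.json directly from disk.
        return os.path.join(pretrained_model_name_or_path, "config.json")
    # Anything else is treated as a Hub repo_id; huggingface_hub then rejects
    # values that are not "repo_name" or "namespace/repo_name", which is the
    # HFValidationError seen in the logs when /runpod-volume/... is missing.
    return hf_hub_download(pretrained_model_name_or_path, "config.json")
```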
### Reproduction
Deploy any model on Runpod using vLLM template
(Template examples:
1. https://github.com/winglian/worker-vllm-new
2. https://github.com/anthonypoe/worker-vllm
### Logs
```shell
2023-10-08T23:48:58.162625747Z File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/transformers/configuration_utils.py", line 675, in _get_config_dict
2023-10-08T23:48:58.162627886Z resolved_config_file = cached_file(
2023-10-08T23:48:58.162629374Z File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/transformers/utils/hub.py", line 429, in cached_file
2023-10-08T23:48:58.162730559Z resolved_file = hf_hub_download(
2023-10-08T23:48:58.162745609Z File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 110, in _inner_fn
2023-10-08T23:48:58.162754559Z validate_repo_id(arg_value)
2023-10-08T23:48:58.162757898Z File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 158, in validate_repo_id
2023-10-08T23:48:58.162814363Z raise HFValidationError(
2023-10-08T23:48:58.162818078Z huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/runpod-volume/Mistralic-7B-1-AWQ'. Use `repo_type` argument if needed.
2023-10-08T23:48:58.162819633Z
2023-10-08T23:48:58.162821011Z During handling of the above exception, another exception occurred:
2023-10-08T23:48:58.162822300Z
2023-10-08T23:48:58.162823725Z Traceback (most recent call last):
2023-10-08T23:48:58.162825121Z File "/root/handler.py", line 45, in <module>
2023-10-08T23:48:58.162882887Z llm = AsyncLLMEngine.from_engine_args(engine_args)
2023-10-08T23:48:58.162885347Z File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 480, in from_engine_args
2023-10-08T23:48:58.162974437Z engine_configs = engine_args.create_engine_configs()
2023-10-08T23:48:58.162976606Z File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/vllm/engine/arg_utils.py", line 174, in create_engine_configs
2023-10-08T23:48:58.163023514Z model_config = ModelConfig(self.model, self.tokenizer,
2023-10-08T23:48:58.163025692Z File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/vllm/config.py", line 74, in __init__
2023-10-08T23:48:58.163081539Z self.hf_config = get_config(model, trust_remote_code, revision)
2023-10-08T23:48:58.163083550Z File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/vllm/transformers_utils/config.py", line 27, in get_config
2023-10-08T23:48:58.163115027Z return MistralConfig.from_pretrained(model, revision=revision)
2023-10-08T23:48:58.163116921Z File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/transformers/configuration_utils.py", line 591, in from_pretrained
2023-10-08T23:48:58.163244365Z config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
2023-10-08T23:48:58.163250169Z File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/transformers/configuration_utils.py", line 620, in get_config_dict
2023-10-08T23:48:58.163338403Z config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
2023-10-08T23:48:58.163341533Z File "/root/miniconda3/envs/pytorch/lib/python3.10/site-packages/transformers/configuration_utils.py", line 696, in _get_config_dict
2023-10-08T23:48:58.163438067Z raise EnvironmentError(
2023-10-08T23:48:58.163443403Z OSError: Can't load the configuration of '/runpod-volume/Mistralic-7B-1-AWQ'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure '/runpod-volume/Mistralic-7B-1-AWQ' is the correct path to a directory containing a config.json file
```
### System info
```shell
n/a
```
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26720/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26678 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26678/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26678/comments | https://api.github.com/repos/huggingface/transformers/issues/26678/events | https://github.com/huggingface/transformers/pull/26678 | 1,932,062,990 | PR_kwDOCUB6oc5cNNdg | 26,678 | [`Core Tokenization`] Support a fix for spm fast models | {
"login": "ArthurZucker",
"id": 48595927,
"node_id": "MDQ6VXNlcjQ4NTk1OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/48595927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ArthurZucker",
"html_url": "https://github.com/ArthurZucker",
"followers_url": "https://api.github.com/users/ArthurZucker/followers",
"following_url": "https://api.github.com/users/ArthurZucker/following{/other_user}",
"gists_url": "https://api.github.com/users/ArthurZucker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ArthurZucker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ArthurZucker/subscriptions",
"organizations_url": "https://api.github.com/users/ArthurZucker/orgs",
"repos_url": "https://api.github.com/users/ArthurZucker/repos",
"events_url": "https://api.github.com/users/ArthurZucker/events{/privacy}",
"received_events_url": "https://api.github.com/users/ArthurZucker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26678). All of your documentation changes will be reflected on that endpoint.",
"Oups 😅 just need to document and fx the tests",
"PR is overall ready, just missing CIs, rebase and core review"
] | 1,696 | 1,705 | 1,705 | COLLABORATOR | null | # What does this PR do?
Fixes the behaviour of the T5 fast tokenizer (`T5TokenizerFast`) when using the `legacy` flag.
Requires `tokenizers==0.15.0`.
This is what it will allows us to do:
```python
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("t5-base", use_fast = True, from_slow = True, legacy = True)
print(tok.tokenize("Hey </s>. how are you"))
tok = AutoTokenizer.from_pretrained("t5-base", use_fast = True, from_slow = True, legacy = False)
print(tok.tokenize("Hey </s>. how are you"))
```
We add `bos_token="<s>"` to make sure it does not strip left, further emphasizing the issue at hand.
```python
['▁Hey', '▁', '</s>', '▁.', '▁how', '▁are', '▁you']
['▁Hey', '▁', '</s>', '.', '▁how', '▁are', '▁you']
```
The extra space that was always added to the eos is now gone.
This is fully backward compatible and can be saved / pushed to the hub. The Metaspace rust object was not broken, and the argument can also be easily set with `tok._tokenizer.pre_tokenizer.legacy = legacy`.
fixes #26318, fixes #26455, fixes #25881, fixes #27900 and more to come
# TODOs
- [x] Add tests/update existing ones
- [x] support T5 only for now
- [ ] add some documentation about this prepend scheme | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26678/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/26678/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26678",
"html_url": "https://github.com/huggingface/transformers/pull/26678",
"diff_url": "https://github.com/huggingface/transformers/pull/26678.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26678.patch",
"merged_at": 1705577514000
} |
https://api.github.com/repos/huggingface/transformers/issues/26677 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26677/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26677/comments | https://api.github.com/repos/huggingface/transformers/issues/26677/events | https://github.com/huggingface/transformers/pull/26677 | 1,932,017,457 | PR_kwDOCUB6oc5cND_b | 26,677 | Fix docstring CLIP configs | {
"login": "isaac-chung",
"id": 48971969,
"node_id": "MDQ6VXNlcjQ4OTcxOTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/48971969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isaac-chung",
"html_url": "https://github.com/isaac-chung",
"followers_url": "https://api.github.com/users/isaac-chung/followers",
"following_url": "https://api.github.com/users/isaac-chung/following{/other_user}",
"gists_url": "https://api.github.com/users/isaac-chung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isaac-chung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isaac-chung/subscriptions",
"organizations_url": "https://api.github.com/users/isaac-chung/orgs",
"repos_url": "https://api.github.com/users/isaac-chung/repos",
"events_url": "https://api.github.com/users/isaac-chung/events{/privacy}",
"received_events_url": "https://api.github.com/users/isaac-chung/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26677). All of your documentation changes will be reflected on that endpoint."
] | 1,696 | 1,696 | 1,696 | CONTRIBUTOR | null | # What does this PR do?
Fixes # (issue) https://github.com/huggingface/transformers/issues/26638 for `CLIPSegTextConfig`, `CLIPSegVisionConfig`, `CLIPTextConfig`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ydshieh
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26677/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26677/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26677",
"html_url": "https://github.com/huggingface/transformers/pull/26677",
"diff_url": "https://github.com/huggingface/transformers/pull/26677.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26677.patch",
"merged_at": 1696847642000
} |
https://api.github.com/repos/huggingface/transformers/issues/26676 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26676/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26676/comments | https://api.github.com/repos/huggingface/transformers/issues/26676/events | https://github.com/huggingface/transformers/pull/26676 | 1,932,008,228 | PR_kwDOCUB6oc5cNCBd | 26,676 | Fix docstring for `CLIPImageProcessor` | {
"login": "isaac-chung",
"id": 48971969,
"node_id": "MDQ6VXNlcjQ4OTcxOTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/48971969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isaac-chung",
"html_url": "https://github.com/isaac-chung",
"followers_url": "https://api.github.com/users/isaac-chung/followers",
"following_url": "https://api.github.com/users/isaac-chung/following{/other_user}",
"gists_url": "https://api.github.com/users/isaac-chung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isaac-chung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isaac-chung/subscriptions",
"organizations_url": "https://api.github.com/users/isaac-chung/orgs",
"repos_url": "https://api.github.com/users/isaac-chung/repos",
"events_url": "https://api.github.com/users/isaac-chung/events{/privacy}",
"received_events_url": "https://api.github.com/users/isaac-chung/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh resolved merge conflicts - ready for ✅ Thanks!",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26676). All of your documentation changes will be reflected on that endpoint."
] | 1,696 | 1,696 | 1,696 | CONTRIBUTOR | null | # What does this PR do?
Fixes # (issue) https://github.com/huggingface/transformers/issues/26638 for `CLIPImageProcessor`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ydshieh
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26676/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26676/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26676",
"html_url": "https://github.com/huggingface/transformers/pull/26676",
"diff_url": "https://github.com/huggingface/transformers/pull/26676.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26676.patch",
"merged_at": 1696854164000
} |
https://api.github.com/repos/huggingface/transformers/issues/26675 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26675/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26675/comments | https://api.github.com/repos/huggingface/transformers/issues/26675/events | https://github.com/huggingface/transformers/pull/26675 | 1,931,994,873 | PR_kwDOCUB6oc5cM_UL | 26,675 | Add early stopping for Bark generation via logits processor | {
"login": "isaac-chung",
"id": 48971969,
"node_id": "MDQ6VXNlcjQ4OTcxOTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/48971969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isaac-chung",
"html_url": "https://github.com/isaac-chung",
"followers_url": "https://api.github.com/users/isaac-chung/followers",
"following_url": "https://api.github.com/users/isaac-chung/following{/other_user}",
"gists_url": "https://api.github.com/users/isaac-chung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isaac-chung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isaac-chung/subscriptions",
"organizations_url": "https://api.github.com/users/isaac-chung/orgs",
"repos_url": "https://api.github.com/users/isaac-chung/repos",
"events_url": "https://api.github.com/users/isaac-chung/events{/privacy}",
"received_events_url": "https://api.github.com/users/isaac-chung/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ylacombe maybe we can continue the [conversation](https://github.com/huggingface/transformers/issues/26672#issuecomment-1753060623) here.",
"> Ideally, we'd have another test on BarkSemanticModelTest, but I'm not sure how to proceed yet.\r\nDo you have any ideas?\r\n\r\nI'm not entirely sure. Maybe we could assert outputs from `self.model.generate` with the new arg somehow?\r\n\r\n> could be possibly passed to BarkModel.generate kwargs without causing issues\r\n\r\nTo confirm that we support this, maybe we should add to `BarkModelIntegrationTests.test_generate_end_to_end_with_sub_models_args` as well?",
"Let's try to do both!",
"I think I managed to add to `BarkModelIntegrationTests` without issues. But I'd like to align on how to proceed with `BarkSemanticModelTest`. Specifically:\r\n1. Only a few tests assert the outputs. As I'm unsure what to expect, I might print the outputs and assert those\r\n2. I've been manually trying to fill in `BarkSemanticGenerationConfig` so that the `generate()` call does not fail. Not sure if there's a more efficient way.",
"@ylacombe thanks! Regarding 1, let's take `BarkModelIntegrationTests.test_generate_end_to_end_with_sub_models_args` for example, the test does not assert any outputs and it simply runs `.generate()`. Would that be fine here?",
"Let's try to find a case where the semantic model has to stop. You can get inspiration from that test: https://github.com/huggingface/transformers/blob/d085662c596066bad82942b1e15819f675cbd15e/tests/models/bark/test_modeling_bark.py#L904-L921\r\n\r\nSo basically, an example where, the same seed, the last output tokens are different, do you think it's possible?",
"If we set `min_eos_p` to anything that's non-zero, we only get the eos_token (set to 10000 for open-end generation). Here is what passed.\r\n```\r\n @slow\r\n def test_generate_semantic_early_stop(self):\r\n input_ids = self.inputs\r\n\r\n # fmt: off\r\n # check first ids\r\n expected_output_ids = [10000]\r\n # fmt: on\r\n\r\n self.semantic_generation_config.min_eos_p = 0.05\r\n\r\n # greedy decoding\r\n with torch.no_grad():\r\n output_ids = self.model.semantic.generate(\r\n **input_ids,\r\n do_sample=False,\r\n temperature=1.0,\r\n semantic_generation_config=self.semantic_generation_config,\r\n )\r\n\r\n self.assertListEqual(output_ids[0, : len(expected_output_ids)].tolist(), expected_output_ids)\r\n``` \r\nIs that what you have in mind?",
"Oh, that seems weird, have you tried with another generation strategy ? (i.e `do_sample=True, temperature=...`)? If you have the same results, it's probably on the logit processor side!",
"Yep getting the same output even after varying those parameters. In terms of expected outputs, \r\n- for this test, is it accurate that we want a list of similar output_ids, just potentially shorter?\r\n- for the logit processor unit test, is the current expected output wrong?\r\n\r\nJust wondering what would be a good way to start debugging this as audio generation is super new to me 🙏 ",
"No worries here, let's do this step by step:\r\n1. Is the logic correct?\r\n2. If going from `min_eos_p` to `np.log(self.min_eos_p)` is correct? might be the wrong logic here, meaning that `probability(eos_token)` might be different from `torch.log(scores[:, eos_token_id]`. In fact I strongly suspect it is where we are wrong. is it log softmax or softmax? In anyways, we should make sure it is correct for every generation strategy\r\n3. make sure `test_early_stop_processor` is correct \r\n\r\nIf we still have this behavior after verifying those steps, we can think of other ways of veryfing it ",
"Good plan, thank you! \r\n\r\nRe: logic, seems like we can obtain log_prob from scores by\r\n```python\r\nlogprobs = torch.nn.functional.log_softmax(scores.float(), dim=-1)\r\n```\r\nThen we can compare `log(min_eos_p)` directly.\r\n\r\nRe: test_early_stop_processor, I realized that I had not implemented the suggestion:\r\n> So basically, an example where, the same seed, the last output tokens are different, do you think it's possible?\r\n\r\nI've done so now that I've read things through again :D . I do wonder if it will be flaky at times due to its stochastic nature?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26675). All of your documentation changes will be reflected on that endpoint.",
"Gentle nudge @ylacombe to let this bubble back up",
"Regarding using a stopping criteria, I don't think it's possible at the moment -> quoting #26672\r\n> * Ideally, we'd add a [custom stopping criteria](https://huggingface.co/docs/transformers/v4.34.0/en/internal/generation_utils#transformers.StoppingCriteria), but as indicated in [custom stopping_critriea function doesn't receive logits scores (receives None instead) #23674](https://github.com/huggingface/transformers/issues/23674) it's not yet possible to use [`scores`](https://huggingface.co/docs/transformers/v4.34.0/en/internal/generation_utils#transformers.StoppingCriteria.__call__.scores) without setting `return_dict_in_generate=True, output_scores=True `.\r\n\r\n",
"It receives None because `output_scores` and `return_dict` are not properly set",
"Yes of course, but don't you think users should have the liberty to set `output_scores` and `return_dict` as they want ?",
"For sure. So the goal here is by default to always stop early? (actually not returning the scores might be better in terms of memory ?) \r\nWhat I mean is that the stopping criterias are meant to be used that way 😉 ",
"> For sure. So the goal here is by default to always stop early? (actually not returning the scores might be better in terms of memory ?) What I mean is that the stopping criterias are meant to be used that way 😉\r\n\r\nYes this is the goal here. Totally agree on the stopping criteria usage! Actually I haven't find a stopping criteria which uses `scores` yet, maybe because of the limitation of having to use `return_dict_in_generate=True, output_scores=True`. #23674 is a discussion on this and I believe this is under @gante's radar! What do you recommend in the meantime ?",
"Hey @ylacombe / @ArthurZucker , please let me know if there's anything else I can do to further this PR.",
"btw, regarding it being a logits processor vs stopping criteria: it is my impression that we want to generate an EOS token under the conditions defined here. Since we want to generate a token, it has to be a logits processor.\r\n\r\n(the main difference between them is that the stopping criteria stops generation right away, and doesn't add any new token -- for batched generation, this can make a big difference)",
"@ylacombe I've run this command and all tests are passing ✅ \r\n```\r\nRUN_SLOW=yes python -m unittest tests.models.bark.test_modeling_bark.BarkModelIntegrationTests\r\n```",
"LGTM ! Let's wait for all the check to pass and merge then! Thanks for the great work here and all the iterations!",
"Thank you all again for your guidance and patience 🙏 much appreciated."
] | 1,696 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
Fixes # (issue) https://github.com/huggingface/transformers/issues/26672
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26675/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26675/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26675",
"html_url": "https://github.com/huggingface/transformers/pull/26675",
"diff_url": "https://github.com/huggingface/transformers/pull/26675.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26675.patch",
"merged_at": 1698401253000
} |
https://api.github.com/repos/huggingface/transformers/issues/26674 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26674/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26674/comments | https://api.github.com/repos/huggingface/transformers/issues/26674/events | https://github.com/huggingface/transformers/pull/26674 | 1,931,896,342 | PR_kwDOCUB6oc5cMpOe | 26,674 | [docstring] fix docstring DPRConfig | {
"login": "AVAniketh0905",
"id": 95468529,
"node_id": "U_kgDOBbC78Q",
"avatar_url": "https://avatars.githubusercontent.com/u/95468529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AVAniketh0905",
"html_url": "https://github.com/AVAniketh0905",
"followers_url": "https://api.github.com/users/AVAniketh0905/followers",
"following_url": "https://api.github.com/users/AVAniketh0905/following{/other_user}",
"gists_url": "https://api.github.com/users/AVAniketh0905/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AVAniketh0905/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AVAniketh0905/subscriptions",
"organizations_url": "https://api.github.com/users/AVAniketh0905/orgs",
"repos_url": "https://api.github.com/users/AVAniketh0905/repos",
"events_url": "https://api.github.com/users/AVAniketh0905/events{/privacy}",
"received_events_url": "https://api.github.com/users/AVAniketh0905/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26674). All of your documentation changes will be reflected on that endpoint.",
"> Thank you!\r\n\r\nThank you very much!!!"
] | 1,696 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Fixes #26638 only for `DPRConfig`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26674/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26674/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26674",
"html_url": "https://github.com/huggingface/transformers/pull/26674",
"diff_url": "https://github.com/huggingface/transformers/pull/26674.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26674.patch",
"merged_at": 1697192023000
} |
https://api.github.com/repos/huggingface/transformers/issues/26673 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26673/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26673/comments | https://api.github.com/repos/huggingface/transformers/issues/26673/events | https://github.com/huggingface/transformers/issues/26673 | 1,931,835,185 | I_kwDOCUB6oc5zJXcx | 26,673 | Improve bark batch generation | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2392046359,
"node_id": "MDU6TGFiZWwyMzkyMDQ2MzU5",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20Second%20Issue",
"name": "Good Second Issue",
"color": "dd935a",
"default": false,
"description": "Issues that are more difficult to do than \"Good First\" issues - give it a try if you want!"
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | null | [] | [
"Hello, I'd like to work on this if no one is working on it.\r\n",
"Hello Yoach,\nSince issue [#26672](https://github.com/huggingface/transformers/issues/26672) seems to already be taken on by a few people, I'd love to try and work on this one !\nThanks, Julien ",
"Hey, @ylacombe. If I understand correctly the input to fine model is output from coarse model. The dimensions are all the same for each of the sample in the batch. How can I specifically get lengths of each sample from here?",
"Hi @divinit7 and @JulienAjdenbaum, thanks for your motivation here!\r\nWould one of you would like to start working on this?\r\n\r\nBTW, there is this new issue #26921, which is quite interesting to work on as well.\r\n\r\n> If I understand correctly the input to fine model is output from coarse model. The dimensions are all the same for each of the sample in the batch. How can I specifically get lengths of each sample from here?\r\n\r\nNice question, basically the length of each samples can be derived from here:\r\nhttps://github.com/huggingface/transformers/blob/ad08137e473e00702fc3088a119da7026e1cb025/src/transformers/models/bark/modeling_bark.py#L958-L964\r\n\r\nYou can get each sample length by looking at the number of `coarse_generation_config.coarse_semantic_pad_token` ([which replace the eos_token_ids of the semantic model](https://github.com/huggingface/transformers/blob/ad08137e473e00702fc3088a119da7026e1cb025/src/transformers/models/bark/modeling_bark.py#L944-L949)) and substract it from `semantic_output.shape[1]`\r\n",
"Hello, I was a bit under the water in the past few days/weeks but I should have time to advance on this issue this week. I'll let you know when I have some code to show/questions !",
"Hello @JulienAjdenbaum, no worries::! let me know if I can help you",
"Hi @JulienAjdenbaum, any update on this ?"
] | 1,696 | 1,699 | 1,699 | COLLABORATOR | null | ### Feature request
Currently, when several samples are passed to Bark at the same time, i.e. during batch generation, the generated audio clips all have the same duration: the shorter clips are extended to the length of the longest clip in the batch.
This problem can be solved by keeping track of sample lengths while `Bark.generate` is called, and then returning the audio lengths together with the generated audio at the end of the call.
### Motivation
This is something that would greatly improve Bark batch generation and that has been requested a few times.
### Your contribution
A few pointers:
- This starts in [`BarkFineModel.generate`](https://github.com/huggingface/transformers/blob/b71f20a7c9f3716d30f6738501559acf863e2c5c/src/transformers/models/bark/modeling_bark.py#L1350-L1354), when computing `n_remove_from_end`, which is the number of tokens to remove, i.e. a proxy for the length of the longest sample in the batch. You can add a tracker here for the length of each sample in the batch (see the sketch after these pointers).
- You can then pass it to the output of `BarkFineModel.generate`, and finally to the output of `BarkModel.generate`
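A minimal sketch of what such a length tracker could look like (the helper name and the exact call site are assumptions for illustration; the padding-token trick follows the discussion in the comments above):
```python
# Sketch only: derive per-sample lengths from the padded semantic output,
# assuming shorter samples were padded with `coarse_semantic_pad_token`.
import torch

def per_sample_lengths(semantic_output: torch.Tensor, coarse_semantic_pad_token: int) -> torch.Tensor:
    # semantic_output: (batch_size, seq_len); count pad tokens per row and
    # subtract them from the padded length to recover each sample's length.
    n_pad = (semantic_output == coarse_semantic_pad_token).sum(dim=1)
    return semantic_output.shape[1] - n_pad
```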
To keep backward compatibility, this feature should be enabled with an additional input boolean. Finally, we should ensure result consistency as compared to samples being generated one by one in the test suite [here](https://github.com/huggingface/transformers/blob/897a826d830e8b1e03eb482b165b5d88a7a08d5f/tests/models/bark/test_modeling_bark.py#L871). | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26673/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26673/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26672 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26672/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26672/comments | https://api.github.com/repos/huggingface/transformers/issues/26672/events | https://github.com/huggingface/transformers/issues/26672 | 1,931,828,835 | I_kwDOCUB6oc5zJV5j | 26,672 | Improve Bark Generation | {
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 2648621985,
"node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request",
"name": "Feature request",
"color": "FBCA04",
"default": false,
"description": "Request for a new feature"
}
] | closed | false | {
"login": "isaac-chung",
"id": 48971969,
"node_id": "MDQ6VXNlcjQ4OTcxOTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/48971969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isaac-chung",
"html_url": "https://github.com/isaac-chung",
"followers_url": "https://api.github.com/users/isaac-chung/followers",
"following_url": "https://api.github.com/users/isaac-chung/following{/other_user}",
"gists_url": "https://api.github.com/users/isaac-chung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isaac-chung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isaac-chung/subscriptions",
"organizations_url": "https://api.github.com/users/isaac-chung/orgs",
"repos_url": "https://api.github.com/users/isaac-chung/repos",
"events_url": "https://api.github.com/users/isaac-chung/events{/privacy}",
"received_events_url": "https://api.github.com/users/isaac-chung/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "isaac-chung",
"id": 48971969,
"node_id": "MDQ6VXNlcjQ4OTcxOTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/48971969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/isaac-chung",
"html_url": "https://github.com/isaac-chung",
"followers_url": "https://api.github.com/users/isaac-chung/followers",
"following_url": "https://api.github.com/users/isaac-chung/following{/other_user}",
"gists_url": "https://api.github.com/users/isaac-chung/gists{/gist_id}",
"starred_url": "https://api.github.com/users/isaac-chung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/isaac-chung/subscriptions",
"organizations_url": "https://api.github.com/users/isaac-chung/orgs",
"repos_url": "https://api.github.com/users/isaac-chung/repos",
"events_url": "https://api.github.com/users/isaac-chung/events{/privacy}",
"received_events_url": "https://api.github.com/users/isaac-chung/received_events",
"type": "User",
"site_admin": false
}
] | [
"I'd love to take this on if no one else is working on this at the moment.",
"I'd be glad to work on this if it's getting assigned.",
"Hey @isaac-chung and @GauriTr, thanks for your motivation here! \r\n@isaac-chung I see that you already started to work on this, feel free to ping me if there are any issues or questions!\r\n\r\n@GauriTr, have you checked #26673 ? If you feel up to the task, let me know",
"@ylacombe Thanks, I was wondering about testing - specifically whether it would be sufficient at this point to test the default config by adding it via `BarkSemanticModelTester`. Other than that, I think the PR is ready if you don't mind taking a look.",
"I'd also love to contribute, if someone would want to make testing an issue in itself, or any other feature addition.",
"Completed by #26675, thanks @isaac-chung again !"
] | 1,696 | 1,698 | 1,698 | COLLABORATOR | null | ### Feature request
According to this [notebook](https://github.com/suno-ai/bark/blob/773624d26db84278a55aacae9a16d7b25fbccab8/notebooks/long_form_generation.ipynb#L160) from the original [bark repo](https://github.com/suno-ai/bark/):
> Advanced Long-Form Generation
Somtimes Bark will hallucinate a little extra audio at the end of the prompt. We can solve this issue by lowering the threshold for bark to stop generating text. We use the min_eos_p kwarg in generate_text_semantic
This rests on an early stopping strategy yet to be implemented in the transformers implementation of Bark. `min_eos_p` is used during the first sub-model generation, i.e `BarkSemanticModel.generate`. It would be great to add this feature to improve Bark generation.
### Motivation
Sometimes, generated speech have weird artefact at the end of the speech, due to this missing feature.
### Your contribution
Some pointers:
- Where `min_eos_p` is used in the original code: [here](https://github.com/suno-ai/bark/blob/773624d26db84278a55aacae9a16d7b25fbccab8/bark/generation.py#L377-L514).
- In the HF implementation, the semantic model is called during generation [here](https://github.com/huggingface/transformers/blob/b71f20a7c9f3716d30f6738501559acf863e2c5c/src/transformers/models/bark/modeling_bark.py#L1583-L1588), which then called `BarkSemanticModel.generate` [here](https://github.com/huggingface/transformers/blob/b71f20a7c9f3716d30f6738501559acf863e2c5c/src/transformers/models/bark/modeling_bark.py#L799-L805).
- Ideally, we'd add a [custom stopping criteria](https://huggingface.co/docs/transformers/v4.34.0/en/internal/generation_utils#transformers.StoppingCriteria), but as indicated in #23674 it's not yet possible to use [`scores`](https://huggingface.co/docs/transformers/v4.34.0/en/internal/generation_utils#transformers.StoppingCriteria.__call__.scores) without setting `return_dict_in_generate=True, output_scores=True `.
- Instead, let's add [here](https://github.com/huggingface/transformers/blob/b71f20a7c9f3716d30f6738501559acf863e2c5c/src/transformers/models/bark/modeling_bark.py#L802) a [logits processor](https://huggingface.co/docs/transformers/v4.34.0/en/internal/generation_utils#logitsprocessor) which will set every token's probability other than the EOS token to `-inf` when the probability of the EOS token id exceeds `min_eos_p` (a sketch is given at the end of this section).
`min_eos_p` should be set by default to None (to keep backward compatibility) in `BarkSemanticGenerationConfig` [here](https://github.com/huggingface/transformers/blob/v4.34.0/src/transformers/models/bark/generation_configuration_bark.py) and could possibly be passed to `BarkModel.generate` kwargs without causing issues!
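A minimal sketch of the logits processor described in the pointers above (the class name and wiring are illustrative, not the final `transformers` API):
```python
# Sketch only: once P(eos) exceeds min_eos_p, force every other token to -inf
# so that the next sampled/argmaxed token is the EOS token.
import torch
from transformers import LogitsProcessor

class EosPrioritizerLogitsProcessor(LogitsProcessor):
    def __init__(self, eos_token_id: int, min_eos_p: float):
        self.eos_token_id = eos_token_id
        self.min_eos_p = min_eos_p

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
        if self.min_eos_p is None:
            return scores
        probs = torch.nn.functional.softmax(scores.float(), dim=-1)
        early_stop = probs[:, self.eos_token_id] > self.min_eos_p
        if early_stop.any():
            # Keep only the EOS logit for the rows that crossed the threshold.
            forced = torch.full_like(scores, float("-inf"))
            forced[:, self.eos_token_id] = scores[:, self.eos_token_id]
            scores = torch.where(early_stop.unsqueeze(-1), forced, scores)
        return scores
```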
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26672/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26672/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26671 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26671/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26671/comments | https://api.github.com/repos/huggingface/transformers/issues/26671/events | https://github.com/huggingface/transformers/issues/26671 | 1,931,826,834 | I_kwDOCUB6oc5zJVaS | 26,671 | param `document-question-answering` raises `KeyError` in pipeline | {
"login": "regmibijay",
"id": 23026528,
"node_id": "MDQ6VXNlcjIzMDI2NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/23026528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/regmibijay",
"html_url": "https://github.com/regmibijay",
"followers_url": "https://api.github.com/users/regmibijay/followers",
"following_url": "https://api.github.com/users/regmibijay/following{/other_user}",
"gists_url": "https://api.github.com/users/regmibijay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/regmibijay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/regmibijay/subscriptions",
"organizations_url": "https://api.github.com/users/regmibijay/orgs",
"repos_url": "https://api.github.com/users/regmibijay/repos",
"events_url": "https://api.github.com/users/regmibijay/events{/privacy}",
"received_events_url": "https://api.github.com/users/regmibijay/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Looks like this has been mitigated with current version."
] | 1,696 | 1,697 | 1,697 | NONE | null | ### System Info
Running the following combination on a Windows machine with a properly set up WSL + CUDA environment.
Machine:
- CPU: i7-9700K + 16GB DDR4 RAM
- GPU: RTX 3060 TI 8GB GDDR6
Libraries: Python 3.10
```
tensorflow==2.14.0
tensorflow-estimator==2.14.0
tensorflow-io-gcs-filesystem==0.31.0
transformers==4.34.0
```
Currently with
```python
pipeline("document-question-answering")
```
I see
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/bijayregmi/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 797, in pipeline
model, default_revision = get_default_model_and_revision(targeted_task, framework, task_options)
File "/home/bijayregmi/.local/lib/python3.10/site-packages/transformers/pipelines/base.py", line 405, in get_default_model_and_revision
return default_models[framework]
KeyError: 'tf'
```
With the [recommended](https://huggingface.co/tasks/document-question-answering) way, I see
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/bijayregmi/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 834, in pipeline
framework, model = infer_framework_load_model(
File "/home/bijayregmi/.local/lib/python3.10/site-packages/transformers/pipelines/base.py", line 282, in infer_framework_load_model
raise ValueError(
ValueError: Could not load model naver-clova-ix/donut-base-finetuned-docvqa with any of the following classes: (<class 'transformers.models.vision_encoder_decoder.modeling_tf_vision_encoder_decoder.TFVisionEncoderDecoderModel'>,). See the original errors:
while loading with TFVisionEncoderDecoderModel, an error is thrown:
Traceback (most recent call last):
File "/home/bijayregmi/.local/lib/python3.10/site-packages/transformers/pipelines/base.py", line 269, in infer_framework_load_model
model = model_class.from_pretrained(model, **kwargs)
File "/home/bijayregmi/.local/lib/python3.10/site-packages/transformers/models/vision_encoder_decoder/modeling_tf_vision_encoder_decoder.py", line 336, in from_pretrained
return super().from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)
File "/home/bijayregmi/.local/lib/python3.10/site-packages/transformers/modeling_tf_utils.py", line 2823, in from_pretrained
raise EnvironmentError(
OSError: naver-clova-ix/donut-base-finetuned-docvqa does not appear to have a file named tf_model.h5 but there is a file for PyTorch weights. Use `from_pt=True` to load this model from those weights.
```
I think this might be an unintended bug; the core issue is not easily isolatable from my end-user perspective with minimal research.
Please feel free to comment on this if you require further information.
### Who can help?
@Narsil
### Information
- [X] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from transformers import pipeline
pipeline("document-question-answering")
```
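A hedged workaround sketch (not an official fix): the default checkpoint referenced in the traceback only ships PyTorch weights, so this assumes a PyTorch install (`pip install torch`) and forces the PyTorch framework explicitly.
```python
# Sketch only: load the default document-QA checkpoint on the PyTorch backend,
# assuming torch is installed alongside (or instead of) TensorFlow.
from transformers import pipeline

doc_qa = pipeline(
    "document-question-answering",
    model="naver-clova-ix/donut-base-finetuned-docvqa",
    framework="pt",
)
```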
### Expected behavior
Expected behaviour would be:
- The library searches for a predefined, pretrained model for the given pipeline and downloads it from HF
- The library initializes weights and other params
- The library is ready to accept further instructions | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26671/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26671/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26670 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26670/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26670/comments | https://api.github.com/repos/huggingface/transformers/issues/26670/events | https://github.com/huggingface/transformers/issues/26670 | 1,931,780,914 | I_kwDOCUB6oc5zJKMy | 26,670 | Custom type symbols add by LlamaTokenizer, LlamaTokenizerFast fails to tokenize them correctly. | {
"login": "qiugen",
"id": 3462941,
"node_id": "MDQ6VXNlcjM0NjI5NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3462941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qiugen",
"html_url": "https://github.com/qiugen",
"followers_url": "https://api.github.com/users/qiugen/followers",
"following_url": "https://api.github.com/users/qiugen/following{/other_user}",
"gists_url": "https://api.github.com/users/qiugen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qiugen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qiugen/subscriptions",
"organizations_url": "https://api.github.com/users/qiugen/orgs",
"repos_url": "https://api.github.com/users/qiugen/repos",
"events_url": "https://api.github.com/users/qiugen/events{/privacy}",
"received_events_url": "https://api.github.com/users/qiugen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"cc @ArthurZucker ",
"Hey, this is a duplicate of #27132, #26871, #25232, #23833. The token's `normalized` field should be set to `False` instead of `True`",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,696 | 1,702 | 1,702 | NONE | null | ### System Info
transformers-cli env
```
- `transformers` version: 4.33.0.dev0
- Platform: Linux-4.18.0-240.el8.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.0
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.1
- Accelerate version: 0.20.3
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```
The vocabulary was extended using the following reference method.
https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/scripts/merge_tokenizer/merge_tokenizers.py
Parts of the USER_DEFINED symbol list are shown below:
```
<pad>
[INST]
[/INST]
[REWARD]
<<SYS>>
<</SYS>>
[CLS]
[SEP]
[RESERVED_0]
[RESERVED_1]
[RESERVED_2]
[RESERVED_3]
[RESERVED_4]
[RESERVED_5]
[RESERVED_6]
[RESERVED_7]
[RESERVED_8]
[RESERVED_9]
```
I found that LlamaTokenizerFast cannot segment these correctly.
```
text='''<<SYS>>背诵<</SYS>>[SEP]白日依山尽,黄河入海流。欲穷千里目,更上一层楼。通过学习这首诗掌握不了䏦䮰
The primary use of LLaMA is research on large language models, including[CLS]
[INST]test[/INST]
test of [REWARD]
test sp1 [RESERVED_0]
test sp2 [RESERVED_1]
test sp2 [RESERVED_11]
<pad>
```
The text shown above, tokenized by LlamaTokenizer, gives the result:
```
['▁', '<<SYS>>', '背', '诵', '<</SYS>>', '[SEP]', '白', '日', '依', '山', '尽', ',', '黄', '河', '入', '海', '流', '。', '欲', '穷', '千', '里', '目', ',', '更', '上', '一', '层', '楼', '。', '通', '过', '学', '习', '这', '首', '诗', '掌', '握', '不', '了', '<0xE4>', '<0x8F>', '<0xA6>', '<0xE4>', '<0xAE>', '<0xB0>', '<0x0A>', 'The', '▁primary', '▁use', '▁of', '▁L', 'La', 'MA', '▁is', '▁research', '▁on', '▁large', '▁language', '▁models', ',', '▁including', '[CLS]', '<0x0A>', '[INST]', 'test', '[/INST]', '<0x0A>', 'test', '▁of', '▁', '[REWARD]', '<0x0A>', 'test', '▁sp', '1', '▁', '[RESERVED_0]', '▁', '<0x0A>', 'test', '▁sp', '2', '▁', '[RESERVED_1]', '<0x0A>', 'test', '▁sp', '2', '▁[', 'RE', 'SER', 'V', 'ED', '_', '1', '1', ']', '<0x0A>', '<pad>', '<0x0A>']
```
Tokenized by LlamaTokenizerFast, the result is:
```
['<s>', '▁<<', 'SY', 'S', '>>', '背', '诵', '<', '</', 'SY', 'S', '>>', '[', 'SE', 'P', ']', '白', '日', '依', '山', '尽', ',', '黄', '河', '入', '海', '流', '。', '欲', '穷', '千', '里', '目', ',', '更', '上', '一', '层', '楼', '。', '通', '过', '学', '习', '这', '首', '诗', '掌', '握', '不', '了', '<0xE4>', '<0x8F>', '<0xA6>', '<0xE4>', '<0xAE>', '<0xB0>', '<0x0A>', 'The', '▁primary', '▁use', '▁of', '▁L', 'La', 'MA', '▁is', '▁research', '▁on', '▁large', '▁language', '▁models', ',', '▁including', '[', 'CL', 'S', ']', '<0x0A>', '[', 'INST', ']', 'test', '[', '/', 'INST', ']', '<0x0A>', 'test', '▁of', '▁[', 'RE', 'W', 'ARD', ']', '<0x0A>', 'test', '▁sp', '1', '▁[', 'RE', 'SER', 'V', 'ED', '_', '0', ']', '▁', '<0x0A>', 'test', '▁sp', '2', '▁[', 'RE', 'SER', 'V', 'ED', '_', '1', ']', '<0x0A>', 'test', '▁sp', '2', '▁[', 'RE', 'SER', 'V', 'ED', '_', '1', '1', ']', '<0x0A>', '<', 'pad', '>', '<0x0A>']
```
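For reference, a minimal sketch (following the maintainer note in the comments above that the added tokens' `normalized` field should be `False`) of registering such user-defined symbols so both tokenizers treat them atomically; the tokenizer path below is a placeholder:
```python
# Sketch only: register the custom symbols as AddedToken(normalized=False)
# on both the slow and the fast tokenizer.
from transformers import AddedToken, LlamaTokenizer, LlamaTokenizerFast

symbols = ["[INST]", "[/INST]", "[REWARD]", "<<SYS>>", "<</SYS>>", "[CLS]", "[SEP]"]
tokens = [AddedToken(s, normalized=False) for s in symbols]

slow = LlamaTokenizer.from_pretrained("path/to/extended-tokenizer")       # placeholder path
fast = LlamaTokenizerFast.from_pretrained("path/to/extended-tokenizer")   # placeholder path
slow.add_tokens(tokens)
fast.add_tokens(tokens)
```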
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
<img width="801" alt="image" src="https://github.com/huggingface/transformers/assets/3462941/1a220dbd-d162-4f5a-9625-f0603e426d0b">
### Expected behavior
For the text above, LlamaTokenizer and LlamaTokenizerFast should handle the USER_DEFINED symbols identically and tokenize them into the same pieces. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26670/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26670/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26669 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26669/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26669/comments | https://api.github.com/repos/huggingface/transformers/issues/26669/events | https://github.com/huggingface/transformers/pull/26669 | 1,931,760,380 | PR_kwDOCUB6oc5cMNRs | 26,669 | [docstring] Fix docstring for `LlamaTokenizer` and `LlamaTokenizerFast` | {
"login": "minhoryang",
"id": 1270855,
"node_id": "MDQ6VXNlcjEyNzA4NTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1270855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/minhoryang",
"html_url": "https://github.com/minhoryang",
"followers_url": "https://api.github.com/users/minhoryang/followers",
"following_url": "https://api.github.com/users/minhoryang/following{/other_user}",
"gists_url": "https://api.github.com/users/minhoryang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/minhoryang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/minhoryang/subscriptions",
"organizations_url": "https://api.github.com/users/minhoryang/orgs",
"repos_url": "https://api.github.com/users/minhoryang/repos",
"events_url": "https://api.github.com/users/minhoryang/events{/privacy}",
"received_events_url": "https://api.github.com/users/minhoryang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26669). All of your documentation changes will be reflected on that endpoint.",
"@ydshieh ready for ✅ thanks!",
"@LysandreJik could you assign this to @ydshieh? Thank you.",
"`$ python3 -m pytest -v --make-reports doc_tests_gpu --doctest-modules src/transformers/models/llama/tokenization_llama_fast.py -sv`\r\n> **[PASSED src/transformers/models/llama/tokenization_llama_fast.py::transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast](https://app.circleci.com/pipelines/github/huggingface/transformers/75029/workflows/2bc31261-2db7-4bb0-a657-fe0e69138af8/jobs/951332/artifacts)**"
] | 1,696 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
Fixes #26638 only for `LlamaTokenizer` and `LlamaTokenizerFast`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ArthurZucker and @younesbelkada
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26669/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26669/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26669",
"html_url": "https://github.com/huggingface/transformers/pull/26669",
"diff_url": "https://github.com/huggingface/transformers/pull/26669.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26669.patch",
"merged_at": 1697036612000
} |
https://api.github.com/repos/huggingface/transformers/issues/26668 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26668/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26668/comments | https://api.github.com/repos/huggingface/transformers/issues/26668/events | https://github.com/huggingface/transformers/pull/26668 | 1,931,735,176 | PR_kwDOCUB6oc5cMIKv | 26,668 | Add OWLv2, bis | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, any idea when this PR will land? Eagerly waiting on playing with the model in HF!",
"@NielsRogge, I think there might be a bug in `post_process_object_detection()` when using cuda device.\r\nYou can see that `scale_fct` is created as cpu tensor here:\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/owlv2/image_processing_owlv2.py#L507\r\nBut here `scale_fct` is moved to device:\r\nhttps://github.com/huggingface/transformers/blob/main/src/transformers/models/owlv2/image_processing_owlv2.py#L568",
"Thanks for spotting @assafbot would you be able to open a PR regarding this?"
] | 1,696 | 1,699 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
This PR adds OWLv2 in a way that is more compliant with the Transformers philosophy of one paper = one model = one file. Rather than modifying the existing OWL-ViT (v1), this PR adds a new standalone Owlv2ForObjectDetection model that copies 99% of the v1 model and only modifies the object detection head.
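For reference, usage of the standalone model should mirror the existing OWL-ViT v1 API. The sketch below is an assumption based on that API (including the checkpoint name), not code from this PR:

```python
import requests
import torch
from PIL import Image
from transformers import Owlv2Processor, Owlv2ForObjectDetection

# assumed checkpoint id; the final repo name may differ
ckpt = "google/owlv2-base-patch16-ensemble"
processor = Owlv2Processor.from_pretrained(ckpt)
model = Owlv2ForObjectDetection.from_pretrained(ckpt)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
texts = [["a photo of a cat", "a photo of a dog"]]
inputs = processor(text=texts, images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# rescale boxes to the original image size and filter by score
# (the post-processing helper lives on the image processor)
target_sizes = torch.tensor([image.size[::-1]])
results = processor.image_processor.post_process_object_detection(
    outputs, threshold=0.3, target_sizes=target_sizes
)
```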
Follow-up of #26379
To do:
- [x] fix image processor
- [x] add doc tests
- [x] verify objectness logits shape | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26668/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26668/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26668",
"html_url": "https://github.com/huggingface/transformers/pull/26668",
"diff_url": "https://github.com/huggingface/transformers/pull/26668.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26668.patch",
"merged_at": 1697208084000
} |
https://api.github.com/repos/huggingface/transformers/issues/26667 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26667/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26667/comments | https://api.github.com/repos/huggingface/transformers/issues/26667/events | https://github.com/huggingface/transformers/pull/26667 | 1,931,679,669 | PR_kwDOCUB6oc5cL9Af | 26,667 | Adding LM-Infinite support to Llama models | {
"login": "Glaciohound",
"id": 29673775,
"node_id": "MDQ6VXNlcjI5NjczNzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/29673775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Glaciohound",
"html_url": "https://github.com/Glaciohound",
"followers_url": "https://api.github.com/users/Glaciohound/followers",
"following_url": "https://api.github.com/users/Glaciohound/following{/other_user}",
"gists_url": "https://api.github.com/users/Glaciohound/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Glaciohound/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Glaciohound/subscriptions",
"organizations_url": "https://api.github.com/users/Glaciohound/orgs",
"repos_url": "https://api.github.com/users/Glaciohound/repos",
"events_url": "https://api.github.com/users/Glaciohound/events{/privacy}",
"received_events_url": "https://api.github.com/users/Glaciohound/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"LM-Infinite seems not equals to standard XFM attention ablitiy in very long range context, due to its attention mask may not overlap all tokens it that scenario",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,696 | 1,700 | 1,700 | NONE | null | # What does this PR do?
In this PR, we implement [LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models](https://browse.arxiv.org/abs/2308.16137) (proposed in August 2023) for Llama models. LM-Infinite removes the length limits of large language models and enables them to generate to unbounded lengths, with performance close to that seen at training time and without any parameter updates. Results show that LM-Infinite can encode sequences as long as 128k tokens on a single A100 GPU and can generate an unbounded number of tokens, thanks to its $O(n)$ time and space complexity for encoding and $O(1)$ complexity for decoding. Interestingly, the later [StreamingLLM](https://github.com/mit-han-lab/streaming-llm) work recently observed similar results with a similar technique.
This implementation is related to, and in response to, [an issue](https://github.com/huggingface/transformers/issues/26553) discussing the integration of LM-Infinite into Hugging Face Transformers.
This implementation falls back to exactly the original behavior when the sequence length is below 4k and only differs when the sequence is longer, so typical users will notice minimal differences; all functionality is used in the same way as before. Moreover, this implementation encodes a sequence in a single pass, like other Transformer models, whereas StreamingLLM encodes token by token (i.e., 2k `forward` passes to encode a context of length 2k).
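To give reviewers a rough intuition, the core of LM-Infinite is a Λ-shaped attention mask that lets every query attend to a few global "starting" tokens plus a sliding window of recent tokens (the paper additionally bounds relative distances, which this sketch omits). The snippet below is a minimal illustration of that mask, not the code added in this PR; names and default values are illustrative:

```python
import torch

def lambda_shaped_mask(seq_len: int, n_global: int = 10, n_local: int = 4096) -> torch.Tensor:
    """Boolean mask where True means 'query may attend to key'."""
    q = torch.arange(seq_len).unsqueeze(1)  # query positions
    k = torch.arange(seq_len).unsqueeze(0)  # key positions
    causal = k <= q                          # never attend to future tokens
    global_branch = k < n_global             # always-visible starting tokens
    local_branch = (q - k) < n_local         # recent-token window
    return causal & (global_branch | local_branch)

# tiny example: 8 tokens, 2 global tokens, window of 3
print(lambda_shaped_mask(8, n_global=2, n_local=3).int())
```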
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. It is related to [this issue](https://github.com/huggingface/transformers/issues/26553).
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- @ArthurZucker
- @Rocketknight1
- @LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26667/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26667/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26667",
"html_url": "https://github.com/huggingface/transformers/pull/26667",
"diff_url": "https://github.com/huggingface/transformers/pull/26667.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26667.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26666 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26666/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26666/comments | https://api.github.com/repos/huggingface/transformers/issues/26666/events | https://github.com/huggingface/transformers/pull/26666 | 1,931,639,264 | PR_kwDOCUB6oc5cL1De | 26,666 | [docstring] Fix docstring for `CodeLlamaTokenizerFast` | {
"login": "Bojun-Feng",
"id": 102875484,
"node_id": "U_kgDOBiHBXA",
"avatar_url": "https://avatars.githubusercontent.com/u/102875484?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bojun-Feng",
"html_url": "https://github.com/Bojun-Feng",
"followers_url": "https://api.github.com/users/Bojun-Feng/followers",
"following_url": "https://api.github.com/users/Bojun-Feng/following{/other_user}",
"gists_url": "https://api.github.com/users/Bojun-Feng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Bojun-Feng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Bojun-Feng/subscriptions",
"organizations_url": "https://api.github.com/users/Bojun-Feng/orgs",
"repos_url": "https://api.github.com/users/Bojun-Feng/repos",
"events_url": "https://api.github.com/users/Bojun-Feng/events{/privacy}",
"received_events_url": "https://api.github.com/users/Bojun-Feng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"**Issues Encountered:**\r\n\r\n**System Configuration:**\r\n- **Machine:** Mac with M1 Chip\r\n- **Environment:** New Conda environment using Python `3.9.18`\r\n\r\n**Installation Issue:**\r\n- Command Executed: `pip install -e \".[dev]\"`\r\n- Error Encountered:\r\n ```\r\n INFO: pip is looking at multiple versions of transformers[dev] to determine which version is compatible with other requirements. This could take a while.\r\n ERROR: Could not find a version that satisfies the requirement tensorflow-text<2.15; extra == \"dev\" (from transformers[dev]) (from versions: none)\r\n ERROR: No matching distribution found for tensorflow-text<2.15; extra == \"dev\"\r\n ```\r\n- Solution: Replaced `\".[dev]\"` with `\".[quality]\"` and the module was successfully installed.\r\n\r\n**Execution Issue:**\r\n- Command Executed: `python3 utils/check_docstrings.py --fix_and_overwrite`\r\n- Error Encountered:\r\n ```\r\n Traceback (most recent call last):\r\n File \".../transformers/src/transformers/utils/import_utils.py\", line 1282, in _get_module\r\n return importlib.import_module(\".\" + module_name, self.__name__)\r\n File \".../python3.9/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\r\n File \".../transformers/src/transformers/models/deta/modeling_deta.py\", line 52, in <module>\r\n from torchvision.ops.boxes import batched_nms\r\n File \".../site-packages/torchvision/__init__.py\", line 6, in <module>\r\n from torchvision import _meta_registrations, datasets, io, models, ops, transforms, utils\r\n File \".../site-packages/torchvision/datasets/__init__.py\", line 1, in <module>\r\n from ._optical_flow import FlyingChairs, FlyingThings3D, HD1K, KittiFlow, Sintel\r\n File \".../site-packages/torchvision/datasets/_optical_flow.py\", line 10, in <module>\r\n from PIL import Image\r\n ModuleNotFoundError: No module named 'PIL'\r\n ```\r\n- Solution: Used `python` instead of `python3` and the file executed successfully.\r\n- However, only doc strings in `CodeLlamaTokenizerFast` were modified. CI tests failed for `CodeLlamaTokenizer`.\r\n\r\n**Next Steps:**\r\n- Add `CodeLlamaTokenizer` back into `OBJECTS_TO_IGNORE`.\r\n- See if doc strings of `CodeLlamaTokenizerFast` are updated correctly and can pass the CI.",
"Hi @ydshieh,\r\n\r\nDoc strings of `CodeLlamaTokenizerFast` seems to be properly updated, and this PR is ready to merge.\r\n\r\nHowever, the above dependency issues were a bit confusing and prevented me from updating the doc strings of `CodeLlamaTokenizer`. Am I setting up the dev environment incorrectly? Any information would be much appreciated!",
"cc @ydshieh ",
"Update on issues encountered: `pip install -e \".[dev-torch]\"` solved the issue for me.",
"Regarding environment, I am not using Mac, and it seems tensorflow related stuffs are in the way (`tensorflow-text`).\r\n\r\nDo you have `tensorflow` and/or `tensorflow-text` installed? What are their versions?",
"> Regarding environment, I am not using Mac, and it seems tensorflow related stuffs are in the way (`tensorflow-text`).\r\n> \r\n> Do you have `tensorflow` and/or `tensorflow-text` installed? What are their versions?\r\n\r\nI have `tensorflow` at version `2.14.0`. I do not have `tensorflow-text`.",
"Maybe you can remove the 3 places of `tensorflow-text` in the file `setup.py` (do not commit) and try install with `.[dev]` again, and see if this could resolve issue. But we can move on to merge this PR just with `CodeLlamaTokenizerFast`. We can leave `CodeLlamaTokenizer` for another PR.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26666). All of your documentation changes will be reflected on that endpoint.",
"> Maybe you can remove the 3 places of `tensorflow-text` in the file `setup.py` (do not commit) and try install with `.[dev]` again, and see if this could resolve issue. But we can move on to merge this PR just with `CodeLlamaTokenizerFast`. We can leave `CodeLlamaTokenizer` for another PR.\r\n\r\nLet's just merge it. I can create another PR for `CodeLlamaTokenizer`.",
"Could you resolve the conflicts so that I can approve the pR? 😉 ",
"> Could you resolve the conflicts so that I can approve the pR? 😉\r\n\r\nI am a bit unsure what you mean by conflict. I have resolved the conversation, if that is what you meant. Please let me know if you would like me to add back the suffix_first parameter.",
"@Bojun-Feng \r\n\r\nCheck `This branch has conflicts that must be resolved` below. It has conflict to the main branch as the (long) list of `OBJECTS_TO_IGNORE` is updated frequently due to the event.",
"> @Bojun-Feng\r\n> \r\n> Check `This branch has conflicts that must be resolved` below. It has conflict to the main branch as the (long) list of `OBJECTS_TO_IGNORE` is updated frequently due to the event.\r\n\r\nConflict resolved, thank you for the clarification!"
] | 1,696 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #26638
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26666/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26666/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26666",
"html_url": "https://github.com/huggingface/transformers/pull/26666",
"diff_url": "https://github.com/huggingface/transformers/pull/26666.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26666.patch",
"merged_at": 1697443906000
} |
https://api.github.com/repos/huggingface/transformers/issues/26665 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26665/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26665/comments | https://api.github.com/repos/huggingface/transformers/issues/26665/events | https://github.com/huggingface/transformers/issues/26665 | 1,931,627,000 | I_kwDOCUB6oc5zIkn4 | 26,665 | How to resume training from a checkpoint when training LoRA using deepspeed? | {
"login": "Sakurakdx",
"id": 48399040,
"node_id": "MDQ6VXNlcjQ4Mzk5MDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/48399040?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sakurakdx",
"html_url": "https://github.com/Sakurakdx",
"followers_url": "https://api.github.com/users/Sakurakdx/followers",
"following_url": "https://api.github.com/users/Sakurakdx/following{/other_user}",
"gists_url": "https://api.github.com/users/Sakurakdx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sakurakdx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sakurakdx/subscriptions",
"organizations_url": "https://api.github.com/users/Sakurakdx/orgs",
"repos_url": "https://api.github.com/users/Sakurakdx/repos",
"events_url": "https://api.github.com/users/Sakurakdx/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sakurakdx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"I found that after loading the optimizer, the device of the step in its status is cpu, but it should be on cuda.\r\n\r\n",
"It seems torch requires `step` to be on the cpu device, but deepspeed requires it to be in the same device?\r\n![Uploading image.png…]()\r\n",
"Gentle ping @pacman100 @muellerzr ",
"I believe https://github.com/huggingface/transformers/pull/27825 should fix the issue",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,696 | 1,704 | 1,704 | NONE | null | ### System Info
- `transformers` version: 4.34.0.dev0
- Platform: Linux-5.4.143.bsk.7-amd64-x86_64-with-glibc2.28
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: 0.21.0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: DEEPSPEED
- use_cpu: False
- num_processes: 1
- machine_rank: 0
- num_machines: 1
- rdzv_backend: static
- same_network: True
- main_training_function: main
- deepspeed_config: {'deepspeed_config_file': 'none', 'zero3_init_flag': False}
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- dynamo_config: {'dynamo_backend': 'INDUCTOR', 'dynamo_mode': 'default', 'dynamo_use_dynamic': False, 'dynamo_use_fullgraph': False}
- PyTorch version (GPU?): 2.0.1+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@pacman100 @ArthurZucker @younesbelkada
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
When training LoRA with DeepSpeed, I want to use the trainer's resume functionality. The sample code is as follows:
```python
causal_model = AutoModelForCausalLM.from_pretrained(model_pretrained_path_,
config=config,
trust_remote_code=True,
low_cpu_mem_usage=self.params["low_cpu_mem_usage"])
peft = PEFT(config_path_or_data=peft_params)
causal_model = peft.get_peft_model(model=causal_model)
trainer = Seq2SeqTrainer(
params=trainer_params,
model=causal_model,
tokenizer=tokenizer,
train_dataset=train_dataset,
data_collator=data_collator,
eval_dataset=eval_dataset,
compute_metrics=dataset_t.metric,
)
trainer.train(resume_from_checkpoint=True)
```
The DeepSpeed config is as follows:
```json
{
"fp16": {
"enabled": "auto",
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"bf16": {
"enabled": "auto"
},
"zero_optimization": {
"stage": 2,
"cpu_offload": false,
"allgather_partitions": true,
"allgather_bucket_size": 5e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 5e8,
"contiguous_gradients": true
},
"gradient_accumulation_steps": "auto",
"gradient_clipping": "auto",
"steps_per_print": 50,
"train_batch_size": "auto",
"train_micro_batch_size_per_gpu": "auto",
"wall_clock_breakdown": false
}
```
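Until a proper fix lands, a possible workaround for the device-mismatch error shown below might be to move the restored optimizer `step` tensors onto the parameters' device before training resumes. This is an untested, hypothetical sketch (where to hook it in, e.g. from a `TrainerCallback`, would need experimentation):

```python
import torch

def move_step_tensors_to(optimizer, device):
    # Hypothetical helper: fused AdamW expects all state tensors (including
    # "step") on the same device as the parameters, but the DeepSpeed-restored
    # state appears to leave "step" on the CPU.
    for state in optimizer.state.values():
        step = state.get("step")
        if torch.is_tensor(step):
            state["step"] = step.to(device)
```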
### Expected behavior
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument state_steps in method wrapper_CUDA___fused_adamw_) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26665/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26665/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26664 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26664/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26664/comments | https://api.github.com/repos/huggingface/transformers/issues/26664/events | https://github.com/huggingface/transformers/pull/26664 | 1,931,562,096 | PR_kwDOCUB6oc5cLlHV | 26,664 | [docstring] Fix docstrings for `UniSpeechConfig`, `UniSpeechForCTC`, `UniSpeechSatConfig`, `UniSpeechSatForCTC` and `Wav2Vec2ForCTC` | {
"login": "gizemt",
"id": 10080892,
"node_id": "MDQ6VXNlcjEwMDgwODky",
"avatar_url": "https://avatars.githubusercontent.com/u/10080892?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gizemt",
"html_url": "https://github.com/gizemt",
"followers_url": "https://api.github.com/users/gizemt/followers",
"following_url": "https://api.github.com/users/gizemt/following{/other_user}",
"gists_url": "https://api.github.com/users/gizemt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gizemt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gizemt/subscriptions",
"organizations_url": "https://api.github.com/users/gizemt/orgs",
"repos_url": "https://api.github.com/users/gizemt/repos",
"events_url": "https://api.github.com/users/gizemt/events{/privacy}",
"received_events_url": "https://api.github.com/users/gizemt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi @gizemt I think the commit history is messed up when you merged the branch into this PR. You can see irrelevant changes to this PR shown up in `Files changed`.\r\n\r\nCould you resolve this please?",
"oops sorry, messed up rebase. fixed it now. @ydshieh ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26664). All of your documentation changes will be reflected on that endpoint.",
"good to go @ydshieh thank you!"
] | 1,696 | 1,697 | 1,697 | CONTRIBUTOR | null | Fixes #26638
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@ydshieh
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26664/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26664/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26664",
"html_url": "https://github.com/huggingface/transformers/pull/26664",
"diff_url": "https://github.com/huggingface/transformers/pull/26664.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26664.patch",
"merged_at": 1697122295000
} |
https://api.github.com/repos/huggingface/transformers/issues/26663 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26663/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26663/comments | https://api.github.com/repos/huggingface/transformers/issues/26663/events | https://github.com/huggingface/transformers/pull/26663 | 1,931,435,468 | PR_kwDOCUB6oc5cLMNZ | 26,663 | fix a typo in flax T5 attention - attention_mask variable is misnamed | {
"login": "giganttheo",
"id": 71786646,
"node_id": "MDQ6VXNlcjcxNzg2NjQ2",
"avatar_url": "https://avatars.githubusercontent.com/u/71786646?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/giganttheo",
"html_url": "https://github.com/giganttheo",
"followers_url": "https://api.github.com/users/giganttheo/followers",
"following_url": "https://api.github.com/users/giganttheo/following{/other_user}",
"gists_url": "https://api.github.com/users/giganttheo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/giganttheo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/giganttheo/subscriptions",
"organizations_url": "https://api.github.com/users/giganttheo/orgs",
"repos_url": "https://api.github.com/users/giganttheo/repos",
"events_url": "https://api.github.com/users/giganttheo/events{/privacy}",
"received_events_url": "https://api.github.com/users/giganttheo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26663). All of your documentation changes will be reflected on that endpoint.",
"> Very nice @giganttheo! Thanks for identifying the bug and proposing the fix 🤗 Confirming that the slow tests pass following the fix? As per [#26564 (comment)](https://github.com/huggingface/transformers/issues/26564#issuecomment-1751779825) If so, then this all LGTM!\r\n\r\nThe slow tests are passing for t5 and longt5:\r\n\r\n`RUN_SLOW=1 pytest -sv tests/models/t5/test_modeling_flax_t5.py::FlaxT5ModelIntegrationTests`\r\n\r\noutputs: `================== 6 passed, 4 warnings in 331.38s (0:05:31) ===================`\r\n\r\nand for the longT5 version:\r\n\r\n`RUN_SLOW=1 pytest -sv tests/models/longt5/test_modeling_flax_longt5.py::FlaxLongT5ModelIntegrationTests`\r\n\r\noutputs: `=================== 1 passed, 1 warning in 401.61s (0:06:41) ===================`",
"Awesome - thanks for confirming! Requesting a final review from @ArthurZucker"
] | 1,696 | 1,696 | 1,696 | CONTRIBUTOR | null | # What does this PR do?
Fixes a typo in the Flax code for the T5 model.
There is a typo in the Attention module of the Flax version of T5, where the attention_mask updated by the `_concatenate_to_cache` method should override the previous attention_mask but does not because of a misnamed variable.
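To illustrate the bug pattern with simplified stand-in code (not the actual Flax module): the mask returned by the cache helper was bound to a fresh name, so downstream code kept using the stale `attention_mask`.

```python
def _concatenate_to_cache_stub(attention_mask):
    # stand-in for the real helper: pretend it extends the mask with cached positions
    return attention_mask + [1]

attention_mask = [1, 1, 0]

# buggy pattern: result bound to a new name, so `attention_mask` stays stale below
mask = _concatenate_to_cache_stub(attention_mask)

# fixed pattern: rebind the same name so later attention-bias code sees the update
attention_mask = _concatenate_to_cache_stub(attention_mask)
```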
Fixes #26564
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
@sanchit-gandhi | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26663/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26663/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26663",
"html_url": "https://github.com/huggingface/transformers/pull/26663",
"diff_url": "https://github.com/huggingface/transformers/pull/26663.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26663.patch",
"merged_at": 1696962993000
} |
https://api.github.com/repos/huggingface/transformers/issues/26662 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26662/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26662/comments | https://api.github.com/repos/huggingface/transformers/issues/26662/events | https://github.com/huggingface/transformers/pull/26662 | 1,931,406,098 | PR_kwDOCUB6oc5cLGMd | 26,662 | fixed Docstring for configuration_bert.py file | {
"login": "neet-14",
"id": 105306415,
"node_id": "U_kgDOBkbZLw",
"avatar_url": "https://avatars.githubusercontent.com/u/105306415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neet-14",
"html_url": "https://github.com/neet-14",
"followers_url": "https://api.github.com/users/neet-14/followers",
"following_url": "https://api.github.com/users/neet-14/following{/other_user}",
"gists_url": "https://api.github.com/users/neet-14/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neet-14/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neet-14/subscriptions",
"organizations_url": "https://api.github.com/users/neet-14/orgs",
"repos_url": "https://api.github.com/users/neet-14/repos",
"events_url": "https://api.github.com/users/neet-14/events{/privacy}",
"received_events_url": "https://api.github.com/users/neet-14/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey - I was participating in this as well, just thought I would try to help. I see CI is failing - did you run `pip install -e \".[dev]\"` before making changes, and `make fixup` after? Looks like it only failed because of a dependency issue. Happy to chat more if you want an extra pair of eyes on this!",
"@abzdel thanks for looking into this. it was showing \"-n was unexpected at this time.\r\nmake: *** [Makefile:10: modified_only_fixup] Error 255\r\n\" when i tried to run `make fixup` command. \r\nand when run `python3.11 utils/check_docstrings.py --fix_and_overwrite`\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Lenovo\\OneDrive\\Desktop\\Git-hub\\fork-transformers\\utils\\check_docstrings.py\", line 44, in <module>\r\n from check_repo import ignore_undocumented\r\n File \"C:\\Users\\Lenovo\\OneDrive\\Desktop\\Git-hub\\fork-transformers\\utils\\check_repo.py\", line 44, in <module>\r\n from transformers import is_flax_available, is_tf_available, is_torch_available\r\nModuleNotFoundError: No module named 'transformers'\r\n",
"I think i have also made a typo in the file `src/transformers/models/bert/configuration_bert.py` in place of `is_decoder` i have written `id_decoder`. \r\n@abzdel can you suggest what other docstrings i have to change in the `src/transformers/models/bert/modeling_bert.py` file",
"@neet-14 I have a busy next few days but I will get back to you on this soon, we'll figure this out!",
"For `BertModel`, see how `UniSpeechSatForCTC` is fixed in this PR #26664",
"> \"-n was unexpected at this time.\r\n> make: *** [Makefile:10: modified_only_fixup] Error 255\r\n\r\nAre you on a Windows machine? I'm thinking this could be due to the Makefile using commands for a bash shell. If you're in VSCode, you can switch to a bash terminal fairly quickly. If not we can try other options.\r\n\r\n> ModuleNotFoundError: No module named 'transformers'\r\n\r\nIf you run `pip install -e \".[dev]\"`, I would think this would install all your dependencies. If not you could always just try `pip install transformers` and see if that's the only dependency you're missing.\r\n\r\n> can you suggest what other docstrings i have to change\r\n\r\nWhen you run `python3 utils/check_docstrings.py --fix_and_overwrite`, your associated .py file should have <fill_type> or <fill_docstring> everywhere you need to fill in (to find them easily you can ctrl-F the .py file and search for '<'). When doing mine, I searched the rest of the codebase for what these types and docstrings would be and filled them in with the same information & styling.",
"@ydshieh `1e-12` or 1`e-5` which value should be used as the default value for `layer_norm_eps` in bert model.",
"The current docstring use 1e-12 and it is the same in the argument default value. So 1e-12. Any issue with this value?",
"@ydshieh \r\nno there is no issue with 1e-12. but 1e-5 is more stable in gradient explosion cases, so should'nt it be better choice as a default value.",
"The default in the config is usually taken from the original model repository to match the original implementation :-)."
] | 1,696 | 1,696 | 1,696 | NONE | null | # What does this PR do?
Fixes issue #26638.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26662/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26662/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26662",
"html_url": "https://github.com/huggingface/transformers/pull/26662",
"diff_url": "https://github.com/huggingface/transformers/pull/26662.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26662.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26661 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26661/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26661/comments | https://api.github.com/repos/huggingface/transformers/issues/26661/events | https://github.com/huggingface/transformers/pull/26661 | 1,931,394,515 | PR_kwDOCUB6oc5cLD65 | 26,661 | [docstring] Fix docstring for 'BertGenerationConfig' | {
"login": "AdwaitSalankar",
"id": 111136306,
"node_id": "U_kgDOBp_OMg",
"avatar_url": "https://avatars.githubusercontent.com/u/111136306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AdwaitSalankar",
"html_url": "https://github.com/AdwaitSalankar",
"followers_url": "https://api.github.com/users/AdwaitSalankar/followers",
"following_url": "https://api.github.com/users/AdwaitSalankar/following{/other_user}",
"gists_url": "https://api.github.com/users/AdwaitSalankar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AdwaitSalankar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AdwaitSalankar/subscriptions",
"organizations_url": "https://api.github.com/users/AdwaitSalankar/orgs",
"repos_url": "https://api.github.com/users/AdwaitSalankar/repos",
"events_url": "https://api.github.com/users/AdwaitSalankar/events{/privacy}",
"received_events_url": "https://api.github.com/users/AdwaitSalankar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh Can you please review this?",
"Hi @ydshieh, Does this PR look good?",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26661). All of your documentation changes will be reflected on that endpoint."
] | 1,696 | 1,697 | 1,697 | CONTRIBUTOR | null | Fixes #26638
## Before Submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can Review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26661/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26661/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26661",
"html_url": "https://github.com/huggingface/transformers/pull/26661",
"diff_url": "https://github.com/huggingface/transformers/pull/26661.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26661.patch",
"merged_at": 1697122874000
} |
https://api.github.com/repos/huggingface/transformers/issues/26660 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26660/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26660/comments | https://api.github.com/repos/huggingface/transformers/issues/26660/events | https://github.com/huggingface/transformers/pull/26660 | 1,931,392,801 | PR_kwDOCUB6oc5cLDkN | 26,660 | Fixed malapropism error | {
"login": "Zhreyu",
"id": 96978606,
"node_id": "U_kgDOBcfGrg",
"avatar_url": "https://avatars.githubusercontent.com/u/96978606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Zhreyu",
"html_url": "https://github.com/Zhreyu",
"followers_url": "https://api.github.com/users/Zhreyu/followers",
"following_url": "https://api.github.com/users/Zhreyu/following{/other_user}",
"gists_url": "https://api.github.com/users/Zhreyu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Zhreyu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Zhreyu/subscriptions",
"organizations_url": "https://api.github.com/users/Zhreyu/orgs",
"repos_url": "https://api.github.com/users/Zhreyu/repos",
"events_url": "https://api.github.com/users/Zhreyu/events{/privacy}",
"received_events_url": "https://api.github.com/users/Zhreyu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,696 | 1,696 | 1,696 | CONTRIBUTOR | null | # What does this PR do?
This PR fixes a minor typo in the code comment. It replaces the word "clone" with "copy" in the comment to improve clarity.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26660/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26660/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26660",
"html_url": "https://github.com/huggingface/transformers/pull/26660",
"diff_url": "https://github.com/huggingface/transformers/pull/26660.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26660.patch",
"merged_at": 1696842297000
} |
https://api.github.com/repos/huggingface/transformers/issues/26659 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26659/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26659/comments | https://api.github.com/repos/huggingface/transformers/issues/26659/events | https://github.com/huggingface/transformers/pull/26659 | 1,931,388,805 | PR_kwDOCUB6oc5cLCyT | 26,659 | Enhanced Model Debugging Tool | {
"login": "guarddogsoft",
"id": 119510961,
"node_id": "U_kgDOBx-XsQ",
"avatar_url": "https://avatars.githubusercontent.com/u/119510961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/guarddogsoft",
"html_url": "https://github.com/guarddogsoft",
"followers_url": "https://api.github.com/users/guarddogsoft/followers",
"following_url": "https://api.github.com/users/guarddogsoft/following{/other_user}",
"gists_url": "https://api.github.com/users/guarddogsoft/gists{/gist_id}",
"starred_url": "https://api.github.com/users/guarddogsoft/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/guarddogsoft/subscriptions",
"organizations_url": "https://api.github.com/users/guarddogsoft/orgs",
"repos_url": "https://api.github.com/users/guarddogsoft/repos",
"events_url": "https://api.github.com/users/guarddogsoft/events{/privacy}",
"received_events_url": "https://api.github.com/users/guarddogsoft/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4720676470,
"node_id": "LA_kwDOCUB6oc8AAAABGV_Odg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/spam",
"name": "spam",
"color": "fbca04",
"default": false,
"description": "Hacktoberfest spam"
}
] | closed | false | null | [] | [
"This is completely unrelated to this repository and has not been discussed in a GitHub issue. I'm closing and marking this off as spam."
] | 1,696 | 1,696 | 1,696 | NONE | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26659/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26659/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26659",
"html_url": "https://github.com/huggingface/transformers/pull/26659",
"diff_url": "https://github.com/huggingface/transformers/pull/26659.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26659.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26658 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26658/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26658/comments | https://api.github.com/repos/huggingface/transformers/issues/26658/events | https://github.com/huggingface/transformers/pull/26658 | 1,931,373,802 | PR_kwDOCUB6oc5cK_3M | 26,658 | [docstring] Fix docstring for `LlamaConfig` | {
"login": "pavaris-pm",
"id": 69553539,
"node_id": "MDQ6VXNlcjY5NTUzNTM5",
"avatar_url": "https://avatars.githubusercontent.com/u/69553539?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pavaris-pm",
"html_url": "https://github.com/pavaris-pm",
"followers_url": "https://api.github.com/users/pavaris-pm/followers",
"following_url": "https://api.github.com/users/pavaris-pm/following{/other_user}",
"gists_url": "https://api.github.com/users/pavaris-pm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pavaris-pm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pavaris-pm/subscriptions",
"organizations_url": "https://api.github.com/users/pavaris-pm/orgs",
"repos_url": "https://api.github.com/users/pavaris-pm/repos",
"events_url": "https://api.github.com/users/pavaris-pm/events{/privacy}",
"received_events_url": "https://api.github.com/users/pavaris-pm/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hey - lmk if you want help fixing CI on this. Looks like it only failed because of a dependency - happy to try to troubleshoot this with you!",
"@abzdel Hi!, thanks for your help. can you please help me fix this CI? I'm not sure which part make test failed. do you have any idea?\r\n\r\nhappy to try to troubleshoot this with you too!",
"@pavaris-pm no worries! When I click the failed task I see this:\r\n\r\n\r\nWhich makes me think you may have forgotten to run `make fixup` before you committed the change. I would go back, run `make fixup`, commit/push the changes (to your branch) again and see if CI passes. CI should automatically re-run on this PR if you do this, but let me know if other bottlenecks occur here.",
"@abzdel Thanks for you help run `make fixup`. That's very helpful for me. I'll help you take a look at it.",
"@pavaris-pm it's no problem, let me know if this fixes the issue!",
"@abzdel i just made a mistake here that cause me to delete the forked repo. i will re-fork repo and do as you recommend. After that, i will mentioned you again with my new PR. Thank you again !"
] | 1,696 | 1,696 | 1,696 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #26638 by fixing a typo in docstring of `LlamaConfig`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26658/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26658/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26658",
"html_url": "https://github.com/huggingface/transformers/pull/26658",
"diff_url": "https://github.com/huggingface/transformers/pull/26658.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26658.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26657 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26657/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26657/comments | https://api.github.com/repos/huggingface/transformers/issues/26657/events | https://github.com/huggingface/transformers/pull/26657 | 1,931,368,512 | PR_kwDOCUB6oc5cK-0G | 26,657 | Add Fast model | {
"login": "raghavanone",
"id": 115454562,
"node_id": "U_kgDOBuGyYg",
"avatar_url": "https://avatars.githubusercontent.com/u/115454562?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/raghavanone",
"html_url": "https://github.com/raghavanone",
"followers_url": "https://api.github.com/users/raghavanone/followers",
"following_url": "https://api.github.com/users/raghavanone/following{/other_user}",
"gists_url": "https://api.github.com/users/raghavanone/gists{/gist_id}",
"starred_url": "https://api.github.com/users/raghavanone/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/raghavanone/subscriptions",
"organizations_url": "https://api.github.com/users/raghavanone/orgs",
"repos_url": "https://api.github.com/users/raghavanone/repos",
"events_url": "https://api.github.com/users/raghavanone/events{/privacy}",
"received_events_url": "https://api.github.com/users/raghavanone/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | [
"FYI @rafaelpadilla to keep an eye on this ! ",
"Hi @raghavanone,\r\n\r\nThank you for your contribution :) \r\nPlease, ping me when you're done for a first review.",
"@rafaelpadilla Request you to do a first review.",
"> Did a first pass on the code, looks good already, my main comment is to leverage the AutoBackbone API for this model as well. This would require adding TextNet as a separate, standalone model first, which implements a TextNetBackbone class as well as a TextNetForImageClassification (I saw the authors released checkpoints fine-tuned on ImageNet-1k).\r\n> \r\n> Next, one could use the following in modeling_fast.py:\r\n> \r\n> ```\r\n> from transformers import AutoBackbone\r\n> \r\n> class FastModel(FastPretrainedModel):\r\n> def __init__(self, config):\r\n> self.backbone = AutoBackbone.from_config(config.backbone_config)\r\n> ```\r\n> \r\n> This is used by models which also leverage backbones like DETR, Deformable DETR, MaskFormer, etc.\r\n\r\nThe TextNet checkpoint released does not have the classifier head with it. So if pulled out as separate module, then the module will only have the TextNet backbone, witch out any downstream models from it . Is it okay to have it that way ?",
"@raghavanone We can add both TextNetModel and TextNetForImageClassification. For the backbone, we'd effectively be wrapping the `TextNetModel` as a backbone to be loaded with the `AutoBackbone` api. For TextNetForImageClassification, if a checkpoint doesn't have weights for the classification head, then they will be randomly initialized with a warning. ",
"@NielsRogge @rafaelpadilla @amyeroberts I have changed the implementation to follow backbone style. Requesting for another round of review.",
"@raghavanone Thanks for the continued work on this model! Could you please split the addition of Fast and TextNet into two separate PRs. We can review TextNet first, and once approved and merged we can review this one. This prevents the diffs being too large for review. "
] | 1,696 | 1,706 | null | CONTRIBUTOR | null | Add Fast model
Fix issue #26501 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26657/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26657/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26657",
"html_url": "https://github.com/huggingface/transformers/pull/26657",
"diff_url": "https://github.com/huggingface/transformers/pull/26657.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26657.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26656 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26656/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26656/comments | https://api.github.com/repos/huggingface/transformers/issues/26656/events | https://github.com/huggingface/transformers/pull/26656 | 1,931,324,725 | PR_kwDOCUB6oc5cK2KW | 26,656 | [DOCSTRING] [Wip] Fix docstring DPT-Model | {
"login": "AVAniketh0905",
"id": 95468529,
"node_id": "U_kgDOBbC78Q",
"avatar_url": "https://avatars.githubusercontent.com/u/95468529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AVAniketh0905",
"html_url": "https://github.com/AVAniketh0905",
"followers_url": "https://api.github.com/users/AVAniketh0905/followers",
"following_url": "https://api.github.com/users/AVAniketh0905/following{/other_user}",
"gists_url": "https://api.github.com/users/AVAniketh0905/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AVAniketh0905/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AVAniketh0905/subscriptions",
"organizations_url": "https://api.github.com/users/AVAniketh0905/orgs",
"repos_url": "https://api.github.com/users/AVAniketh0905/repos",
"events_url": "https://api.github.com/users/AVAniketh0905/events{/privacy}",
"received_events_url": "https://api.github.com/users/AVAniketh0905/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,696 | 1,696 | 1,696 | CONTRIBUTOR | null | # What does this PR do?
Address #26638
HELP NEEDED!
## Description
I have followed the contributing guide for #26638, but couldn't generate anything using
`python3 utils/check_docstrings.py --fix_and_overwrite`.
The output,
```
2023-10-07 12:47:24.603397: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-10-07 12:47:24.603448: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-10-07 12:47:24.603485: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-10-07 12:47:25.616995: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
```
A similar issue: [Tensorflow](https://github.com/tensorflow/tensorflow/issues/62002)
The docstring was not missing anything, so I have removed `DPTModel` from
`utils/check_docstrings.py` > `OBJECTS_TO_IGNORE`.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26656/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26656/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26656",
"html_url": "https://github.com/huggingface/transformers/pull/26656",
"diff_url": "https://github.com/huggingface/transformers/pull/26656.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26656.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26655 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26655/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26655/comments | https://api.github.com/repos/huggingface/transformers/issues/26655/events | https://github.com/huggingface/transformers/pull/26655 | 1,931,300,725 | PR_kwDOCUB6oc5cKxZQ | 26,655 | Update output.md | {
"login": "Anuj-Mishraa",
"id": 100562544,
"node_id": "U_kgDOBf52cA",
"avatar_url": "https://avatars.githubusercontent.com/u/100562544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Anuj-Mishraa",
"html_url": "https://github.com/Anuj-Mishraa",
"followers_url": "https://api.github.com/users/Anuj-Mishraa/followers",
"following_url": "https://api.github.com/users/Anuj-Mishraa/following{/other_user}",
"gists_url": "https://api.github.com/users/Anuj-Mishraa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Anuj-Mishraa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Anuj-Mishraa/subscriptions",
"organizations_url": "https://api.github.com/users/Anuj-Mishraa/orgs",
"repos_url": "https://api.github.com/users/Anuj-Mishraa/repos",
"events_url": "https://api.github.com/users/Anuj-Mishraa/events{/privacy}",
"received_events_url": "https://api.github.com/users/Anuj-Mishraa/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4720676470,
"node_id": "LA_kwDOCUB6oc8AAAABGV_Odg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/spam",
"name": "spam",
"color": "fbca04",
"default": false,
"description": "Hacktoberfest spam"
}
] | closed | false | null | [] | [
"What is this PR? It seems completely unrelated to this repository"
] | 1,696 | 1,697 | 1,697 | NONE | null | Added Output example for diffusion image generation model
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26655/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26655/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26655",
"html_url": "https://github.com/huggingface/transformers/pull/26655",
"diff_url": "https://github.com/huggingface/transformers/pull/26655.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26655.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26654 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26654/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26654/comments | https://api.github.com/repos/huggingface/transformers/issues/26654/events | https://github.com/huggingface/transformers/pull/26654 | 1,931,279,598 | PR_kwDOCUB6oc5cKtMS | 26,654 | Fix source_prefix default value | {
"login": "jheitmann",
"id": 25958845,
"node_id": "MDQ6VXNlcjI1OTU4ODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/25958845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jheitmann",
"html_url": "https://github.com/jheitmann",
"followers_url": "https://api.github.com/users/jheitmann/followers",
"following_url": "https://api.github.com/users/jheitmann/following{/other_user}",
"gists_url": "https://api.github.com/users/jheitmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jheitmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jheitmann/subscriptions",
"organizations_url": "https://api.github.com/users/jheitmann/orgs",
"repos_url": "https://api.github.com/users/jheitmann/repos",
"events_url": "https://api.github.com/users/jheitmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/jheitmann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26654). All of your documentation changes will be reflected on that endpoint."
] | 1,696 | 1,705 | 1,696 | CONTRIBUTOR | null | # What does this PR do?
This pull request updates the `run_summarization.py` script in the PyTorch implementation to fix the default value of `source_prefix`. Currently, the default value is set to `""`, which can be misleading when fine-tuning T5 models. This PR sets the default value to `None`, aligning it with other implementations and ensuring that users are prompted with a warning when not providing a source prefix.
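To make the change concrete, here is a minimal sketch of the pattern described above; the field metadata, helper name, and warning text are illustrative approximations, not the exact diff:

```python
# Hypothetical sketch of the pattern used in run_summarization.py; names and
# messages are approximations, not a copy of the actual change.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class DataTrainingArguments:
    # Defaulting to None (instead of "") lets the script tell "no prefix given"
    # apart from "empty prefix given", so the T5 warning below can be emitted.
    source_prefix: Optional[str] = field(
        default=None,
        metadata={"help": "A prefix to add before every source text (useful for T5 models)."},
    )


def warn_if_t5_without_prefix(model_name: str, source_prefix: Optional[str]) -> None:
    t5_checkpoints = ("t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b")
    if source_prefix is None and model_name in t5_checkpoints:
        print(
            "You're running a t5 model but didn't provide a source prefix, "
            "which is expected, e.g. with `--source_prefix 'summarize: '`"
        )
```

With `None` as the default, "prefix not provided" and "prefix explicitly set to empty" become distinguishable, which is what allows the warning to fire.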
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #26653
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- @ArthurZucker
- @younesbelkada
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26654/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26654/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26654",
"html_url": "https://github.com/huggingface/transformers/pull/26654",
"diff_url": "https://github.com/huggingface/transformers/pull/26654.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26654.patch",
"merged_at": 1696963750000
} |
https://api.github.com/repos/huggingface/transformers/issues/26653 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26653/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26653/comments | https://api.github.com/repos/huggingface/transformers/issues/26653/events | https://github.com/huggingface/transformers/issues/26653 | 1,931,276,608 | I_kwDOCUB6oc5zHPFA | 26,653 | Fix source_prefix default value in run_summarization.py | {
"login": "jheitmann",
"id": 25958845,
"node_id": "MDQ6VXNlcjI1OTU4ODQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/25958845?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jheitmann",
"html_url": "https://github.com/jheitmann",
"followers_url": "https://api.github.com/users/jheitmann/followers",
"following_url": "https://api.github.com/users/jheitmann/following{/other_user}",
"gists_url": "https://api.github.com/users/jheitmann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jheitmann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jheitmann/subscriptions",
"organizations_url": "https://api.github.com/users/jheitmann/orgs",
"repos_url": "https://api.github.com/users/jheitmann/repos",
"events_url": "https://api.github.com/users/jheitmann/events{/privacy}",
"received_events_url": "https://api.github.com/users/jheitmann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,696 | 1,696 | 1,696 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.35.0.dev0
- Platform: macOS-13.4-arm64-arm-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help?
@ArthurZucker @younes
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Run the following command to reproduce the behavior:
```
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```
### Expected behavior
When a user fine-tunes a T5 model without explicitly setting the `source_prefix`, the warning message prompting users to set a `source_prefix` is not displayed. This is because the `source_prefix` is set to its default value of `""` in this PyTorch implementation of the `run_summarization.py` script, which is not equivalent to `None`. This behavior is inconsistent with other implementations, where the default value of `source_prefix` is `None`, and the warning is displayed when not provided by the user. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26653/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/26653/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26652 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26652/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26652/comments | https://api.github.com/repos/huggingface/transformers/issues/26652/events | https://github.com/huggingface/transformers/issues/26652 | 1,931,268,435 | I_kwDOCUB6oc5zHNFT | 26,652 | > | {
"login": "zzzzzero",
"id": 12083809,
"node_id": "MDQ6VXNlcjEyMDgzODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/12083809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zzzzzero",
"html_url": "https://github.com/zzzzzero",
"followers_url": "https://api.github.com/users/zzzzzero/followers",
"following_url": "https://api.github.com/users/zzzzzero/following{/other_user}",
"gists_url": "https://api.github.com/users/zzzzzero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zzzzzero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zzzzzero/subscriptions",
"organizations_url": "https://api.github.com/users/zzzzzero/orgs",
"repos_url": "https://api.github.com/users/zzzzzero/repos",
"events_url": "https://api.github.com/users/zzzzzero/events{/privacy}",
"received_events_url": "https://api.github.com/users/zzzzzero/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,696 | 1,696 | 1,696 | NONE | null | null | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26652/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26652/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26651 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26651/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26651/comments | https://api.github.com/repos/huggingface/transformers/issues/26651/events | https://github.com/huggingface/transformers/pull/26651 | 1,931,266,298 | PR_kwDOCUB6oc5cKqmH | 26,651 | remove the obsolete code related to fairscale FSDP | {
"login": "statelesshz",
"id": 28150734,
"node_id": "MDQ6VXNlcjI4MTUwNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/28150734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/statelesshz",
"html_url": "https://github.com/statelesshz",
"followers_url": "https://api.github.com/users/statelesshz/followers",
"following_url": "https://api.github.com/users/statelesshz/following{/other_user}",
"gists_url": "https://api.github.com/users/statelesshz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/statelesshz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/statelesshz/subscriptions",
"organizations_url": "https://api.github.com/users/statelesshz/orgs",
"repos_url": "https://api.github.com/users/statelesshz/repos",
"events_url": "https://api.github.com/users/statelesshz/events{/privacy}",
"received_events_url": "https://api.github.com/users/statelesshz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@muellerzr and @ArthurZucker could you please take a look at this PR? Thanks :D",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26651). All of your documentation changes will be reflected on that endpoint.",
"@amyeroberts sorry for bothering, could you please review this PR?"
] | 1,696 | 1,698 | 1,698 | CONTRIBUTOR | null | # What does this PR do?
As the title says, this PR introduces two modifications:
1. removes the gradient clipping code related to fairscale FSDP, as fairscale FSDP support has been removed. See: https://github.com/huggingface/transformers/pull/25702
2. updates the error message shown when mixing `--bf16` and `--half_precision_backend apex`, as the `cuda_amp` option is no longer available. I'm not sure how to word it more appropriately, so I just use `auto` instead (a rough sketch of the idea is shown below).
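For illustration only, a rough sketch of the kind of guard point 2 refers to; `bf16` and `half_precision_backend` are existing `TrainingArguments` fields, but the placement and the exact message wording here are assumptions rather than the PR's actual diff:

```python
def check_half_precision_backend(bf16: bool, half_precision_backend: str) -> None:
    # Hypothetical stand-in for the updated check; in transformers this logic
    # lives inside TrainingArguments, and the real message may differ.
    if bf16 and half_precision_backend == "apex":
        raise ValueError(
            "apex does not support bf16; please use fp16 instead, or pass "
            "`--half_precision_backend auto` rather than `--half_precision_backend apex`."
        )


check_half_precision_backend(bf16=True, half_precision_backend="auto")  # passes silently
```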
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26651/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26651/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26651",
"html_url": "https://github.com/huggingface/transformers/pull/26651",
"diff_url": "https://github.com/huggingface/transformers/pull/26651.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26651.patch",
"merged_at": 1698666903000
} |
https://api.github.com/repos/huggingface/transformers/issues/26650 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26650/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26650/comments | https://api.github.com/repos/huggingface/transformers/issues/26650/events | https://github.com/huggingface/transformers/pull/26650 | 1,931,263,679 | PR_kwDOCUB6oc5cKqE2 | 26,650 | [DOCS] updated docstring for tokenizer | {
"login": "KMJ-007",
"id": 86996507,
"node_id": "MDQ6VXNlcjg2OTk2NTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/86996507?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMJ-007",
"html_url": "https://github.com/KMJ-007",
"followers_url": "https://api.github.com/users/KMJ-007/followers",
"following_url": "https://api.github.com/users/KMJ-007/following{/other_user}",
"gists_url": "https://api.github.com/users/KMJ-007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMJ-007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMJ-007/subscriptions",
"organizations_url": "https://api.github.com/users/KMJ-007/orgs",
"repos_url": "https://api.github.com/users/KMJ-007/repos",
"events_url": "https://api.github.com/users/KMJ-007/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMJ-007/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh Can you please review this?\r\n",
"Hi @KMJ-007 \r\n\r\nThank you for your contribution.\r\n\r\nLet's not take a huge number of entries. Either a single entry, or the entries from the same model architecture. So everyone can have the chance to contribute 🙏 ",
"understood!\r\n\r\nclosing this PR and creating new PR for same model architecture"
] | 1,696 | 1,698 | 1,698 | NONE | null | # What does this PR do?
Updated the docstrings for the following:
`BartTokenizerFast`, `BarthezTokenizerFast`, `BertTokenizerFast`, `AlbertTokenizerFast`, `BigBirdTokenizerFast`, `BlenderbotSmallTokenizerFast`, `BlenderbotTokenizerFast` | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26650/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26650/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26650",
"html_url": "https://github.com/huggingface/transformers/pull/26650",
"diff_url": "https://github.com/huggingface/transformers/pull/26650.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26650.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26649 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26649/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26649/comments | https://api.github.com/repos/huggingface/transformers/issues/26649/events | https://github.com/huggingface/transformers/pull/26649 | 1,931,234,493 | PR_kwDOCUB6oc5cKkTH | 26,649 | [docstring] Fix docstring for GPT2Config | {
"login": "KMJ-007",
"id": 86996507,
"node_id": "MDQ6VXNlcjg2OTk2NTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/86996507?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMJ-007",
"html_url": "https://github.com/KMJ-007",
"followers_url": "https://api.github.com/users/KMJ-007/followers",
"following_url": "https://api.github.com/users/KMJ-007/following{/other_user}",
"gists_url": "https://api.github.com/users/KMJ-007/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KMJ-007/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KMJ-007/subscriptions",
"organizations_url": "https://api.github.com/users/KMJ-007/orgs",
"repos_url": "https://api.github.com/users/KMJ-007/repos",
"events_url": "https://api.github.com/users/KMJ-007/events{/privacy}",
"received_events_url": "https://api.github.com/users/KMJ-007/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh all checks have passed!\r\n\r\ncan you review this PR\r\n\r\nthankyou",
"closing this, this PR is already fixing this!\r\n\r\nhttps://github.com/huggingface/transformers/pull/26642"
] | 1,696 | 1,696 | 1,696 | NONE | null | # What does this PR do?
Fixed the docstring for `GPT2Config`.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26649/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26649/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26649",
"html_url": "https://github.com/huggingface/transformers/pull/26649",
"diff_url": "https://github.com/huggingface/transformers/pull/26649.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26649.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26648 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26648/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26648/comments | https://api.github.com/repos/huggingface/transformers/issues/26648/events | https://github.com/huggingface/transformers/pull/26648 | 1,931,217,011 | PR_kwDOCUB6oc5cKg3D | 26,648 | fix typos in idefics.md | {
"login": "dribnet",
"id": 945979,
"node_id": "MDQ6VXNlcjk0NTk3OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/945979?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dribnet",
"html_url": "https://github.com/dribnet",
"followers_url": "https://api.github.com/users/dribnet/followers",
"following_url": "https://api.github.com/users/dribnet/following{/other_user}",
"gists_url": "https://api.github.com/users/dribnet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dribnet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dribnet/subscriptions",
"organizations_url": "https://api.github.com/users/dribnet/orgs",
"repos_url": "https://api.github.com/users/dribnet/repos",
"events_url": "https://api.github.com/users/dribnet/events{/privacy}",
"received_events_url": "https://api.github.com/users/dribnet/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,696 | 1,696 | 1,696 | CONTRIBUTOR | null | Two typos found in reviewing this documentation.
1) max_new_tokens=***6***, as ***4*** is not sufficient to generate "Vegetables" as indicated - you will get only "Veget". (Incidentally, some mention of how to select this value might be useful, as it seems to change in each example.)
2) inputs = processor(prompts, return_tensors="pt")***.to(device)***, as inputs need to be on the same device as the model (as they are in all other examples on the page); a corrected snippet is sketched below.
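A condensed sketch of the corrected usage with both fixes applied; `prompts`, `processor`, `model`, and `device` are assumed to be defined earlier on the documentation page, so this is illustrative rather than a verbatim copy of the doc example:

```python
# Hypothetical condensed version of the doc example with both fixes applied.
inputs = processor(prompts, return_tensors="pt").to(device)  # inputs on the same device as the model
generated_ids = model.generate(**inputs, max_new_tokens=6)   # 6 new tokens are needed to get "Vegetables"
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_text)
```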
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26648/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26648/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26648",
"html_url": "https://github.com/huggingface/transformers/pull/26648",
"diff_url": "https://github.com/huggingface/transformers/pull/26648.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26648.patch",
"merged_at": 1696846683000
} |
https://api.github.com/repos/huggingface/transformers/issues/26647 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26647/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26647/comments | https://api.github.com/repos/huggingface/transformers/issues/26647/events | https://github.com/huggingface/transformers/issues/26647 | 1,931,216,473 | I_kwDOCUB6oc5zHAZZ | 26,647 | add T5 as decoder only | {
"login": "Biyani404198",
"id": 92304955,
"node_id": "U_kgDOBYB2Ow",
"avatar_url": "https://avatars.githubusercontent.com/u/92304955?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Biyani404198",
"html_url": "https://github.com/Biyani404198",
"followers_url": "https://api.github.com/users/Biyani404198/followers",
"following_url": "https://api.github.com/users/Biyani404198/following{/other_user}",
"gists_url": "https://api.github.com/users/Biyani404198/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Biyani404198/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Biyani404198/subscriptions",
"organizations_url": "https://api.github.com/users/Biyani404198/orgs",
"repos_url": "https://api.github.com/users/Biyani404198/repos",
"events_url": "https://api.github.com/users/Biyani404198/events{/privacy}",
"received_events_url": "https://api.github.com/users/Biyani404198/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1843244711,
"node_id": "MDU6TGFiZWwxODQzMjQ0NzEx",
"url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model",
"name": "New model",
"color": "fbca04",
"default": false,
"description": ""
}
] | open | false | null | [] | [] | 1,696 | 1,696 | null | NONE | null | ### Model description
Since T5/ByT5 is an encoder-decoder model, I created a subclass `T5DecoderOnlyForCausalLM` to use it as a decoder-only model for an OCR task:
```python
from transformers.models.t5.modeling_t5 import T5PreTrainedModel, T5Stack
import torch
import torch.nn as nn
from transformers.modeling_outputs import CausalLMOutputWithCrossAttentions, Seq2SeqLMOutput


class T5DecoderOnlyForCausalLM(T5PreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        self.shared = nn.Embedding(config.vocab_size, config.d_model)
        self.decoder = T5Stack(config, self.shared)
        self.lm_head = nn.Linear(config.d_model, config.vocab_size, bias=False)
        self.is_decoder = True
        config.is_decoder = True
        config.use_cache = False

    def forward(
        self,
        input_ids=None,
        attention_mask=None,
        decoder_attention_mask=None,
        encoder_hidden_states=None,
        encoder_attention_mask=None,
        decoder_input_ids=None,
        inputs_embeds=None,
        decoder_inputs_embeds=None,
        head_mask=None,
        use_cache=None,
        cross_attn_head_mask=None,
        past_key_values=None,
        output_attentions=None,
        output_hidden_states=None,
        return_dict=None,
    ):
        decoder_outputs = self.decoder(
            input_ids=input_ids,
            attention_mask=decoder_attention_mask,
            past_key_values=past_key_values,
            encoder_hidden_states=encoder_hidden_states,
            encoder_attention_mask=encoder_attention_mask,
            inputs_embeds=decoder_inputs_embeds,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
            return_dict=return_dict,
        )
        last_hidden_state = decoder_outputs.last_hidden_state
        logits = self.lm_head(last_hidden_state)
        hidden_states = decoder_outputs.hidden_states
        past_key_values = decoder_outputs.past_key_values
        attentions = decoder_outputs.attentions
        cross_attentions = decoder_outputs.cross_attentions
        return CausalLMOutputWithCrossAttentions(
            logits=logits,
            past_key_values=past_key_values,
            hidden_states=hidden_states,
            attentions=attentions,
            cross_attentions=cross_attentions,
        )

    def prepare_inputs_for_generation(
        self,
        input_ids,
        past_key_values=None,
        attention_mask=None,
        head_mask=None,
        decoder_head_mask=None,
        decoder_attention_mask=None,
        cross_attn_head_mask=None,
        use_cache=None,
        encoder_outputs=None,
        **kwargs,
    ):
        # cut decoder_input_ids if past is used
        if past_key_values is not None:
            input_ids = input_ids[:, -1:]

        return {
            "input_ids": input_ids,
            "past_key_values": past_key_values,
            "encoder_outputs": encoder_outputs,
            "attention_mask": attention_mask,
            "head_mask": head_mask,
            "decoder_head_mask": decoder_head_mask,
            "decoder_attention_mask": decoder_attention_mask,
            "cross_attn_head_mask": cross_attn_head_mask,
            "use_cache": use_cache,
        }

    def prepare_decoder_input_ids_from_labels(self, labels: torch.Tensor):
        return self._shift_right(labels)
```
I'm now using it with `VisionEncoderDecoderModel`:
```python
tokenizer = ByT5Tokenizer.from_pretrained('google/byt5-base')
image_processor = ViTImageProcessor.from_pretrained('google/vit-base-patch16-224-in21k')
processor = TrOCRProcessor(image_processor=image_processor, tokenizer=tokenizer)
encoder = ViTModel.from_pretrained("google/vit-base-patch16-224")
decoder = T5DecoderOnlyForCausalLM.from_pretrained("google/byt5-base")
model = VisionEncoderDecoderModel(encoder=encoder, decoder=decoder)
```
But for each epoch, I'm getting the same label as the prediction. I'm not sure what's going wrong. The training args and trainer look like this:
```python
training_args = Seq2SeqTrainingArguments(
    predict_with_generate=True,
    evaluation_strategy="steps",
    num_train_epochs=1,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    fp16=False,  ##
    fp16_full_eval=False,
    output_dir="/content/",
    logging_steps=2,
    save_steps=5,
    eval_steps=5,
)
```
```python
def compute_metrics(pred):
    labels_ids = pred.label_ids
    pred_ids = pred.predictions
    pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    labels_ids[labels_ids == -100] = tokenizer.pad_token_id
    label_str = tokenizer.batch_decode(labels_ids, skip_special_tokens=True)
    cer = cer_metric.compute(predictions=pred_str, references=label_str)
    return {"cer": cer}
```
```python
trainer = Seq2SeqTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=dataset,
    eval_dataset=dataset_val,
    data_collator=default_data_collator,
)
```
@NielsRogge can you please help me understand what's going wrong, and why, for each epoch, the generated text for multiple images is the same?
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
implementation link for training:
[https://github.com/iitb-research-code/byt5-model/blob/dev/train_hg.py](url)
link for inference:
[https://github.com/iitb-research-code/byt5-model/blob/dev/inference.py](url) | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26647/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26647/timeline | null | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26646 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26646/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26646/comments | https://api.github.com/repos/huggingface/transformers/issues/26646/events | https://github.com/huggingface/transformers/issues/26646 | 1,931,197,624 | I_kwDOCUB6oc5zG7y4 | 26,646 | Unable to achieve the same accuracy between Trainer API and tensorflow models | {
"login": "KerenzaDoxolodeo",
"id": 7535438,
"node_id": "MDQ6VXNlcjc1MzU0Mzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7535438?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KerenzaDoxolodeo",
"html_url": "https://github.com/KerenzaDoxolodeo",
"followers_url": "https://api.github.com/users/KerenzaDoxolodeo/followers",
"following_url": "https://api.github.com/users/KerenzaDoxolodeo/following{/other_user}",
"gists_url": "https://api.github.com/users/KerenzaDoxolodeo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KerenzaDoxolodeo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KerenzaDoxolodeo/subscriptions",
"organizations_url": "https://api.github.com/users/KerenzaDoxolodeo/orgs",
"repos_url": "https://api.github.com/users/KerenzaDoxolodeo/repos",
"events_url": "https://api.github.com/users/KerenzaDoxolodeo/events{/privacy}",
"received_events_url": "https://api.github.com/users/KerenzaDoxolodeo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"Hi @KerenzaDoxolodeo, thanks for raising an issue! \r\n\r\nThis is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,696 | 1,702 | 1,702 | NONE | null | ### System Info
- `transformers` version: 4.33.0
- Platform: Linux-6.1.42+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: 0.22.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): 2.12.0 (True)
- Flax version (CPU?/GPU?/TPU?): 0.7.2 (gpu)
- Jax version: 0.4.13
- JaxLib version: 0.4.13
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I ran xlm-roberta with three implementations:
1) Using TFAutoModelForSequenceClassification
2) Using TFAutoModel, with the classification layer as faithful as possible to huggingface's implementation.
Code : https://www.kaggle.com/code/realdeo/keras-code/settings?scriptVersionId=145530298
3) Using TrainerAPI
Code : https://www.kaggle.com/code/realdeo/fork-of-notebookcb67cb4ef2/notebook?scriptVersionId=145540775
### Expected behavior
I expect the code to have roughly the same accuracy. What happens here is that the Trainer API trains successfully after 1 epoch, while the TensorFlow implementation is stuck predicting the same label. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26646/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26646/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26645 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26645/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26645/comments | https://api.github.com/repos/huggingface/transformers/issues/26645/events | https://github.com/huggingface/transformers/pull/26645 | 1,931,101,469 | PR_kwDOCUB6oc5cKIlL | 26,645 | Adding LlamaInfinite model which implements LM-Infinite on Llama | {
"login": "Glaciohound",
"id": 29673775,
"node_id": "MDQ6VXNlcjI5NjczNzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/29673775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Glaciohound",
"html_url": "https://github.com/Glaciohound",
"followers_url": "https://api.github.com/users/Glaciohound/followers",
"following_url": "https://api.github.com/users/Glaciohound/following{/other_user}",
"gists_url": "https://api.github.com/users/Glaciohound/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Glaciohound/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Glaciohound/subscriptions",
"organizations_url": "https://api.github.com/users/Glaciohound/orgs",
"repos_url": "https://api.github.com/users/Glaciohound/repos",
"events_url": "https://api.github.com/users/Glaciohound/events{/privacy}",
"received_events_url": "https://api.github.com/users/Glaciohound/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hello!\r\n\r\nThank you for taking the time and care to implement this. I'm doing some benchmarking as I'm writing this now :)\r\nI do have to say that I would prefer a solution along the lines of what is being discussed in #26553, e.g. creating some `Cache` class and/or compartmentalising the key/query rotation etc. into a specific method so a third-party can cleanly overwrite it. This preference originates for a few reasons:\r\n* package-wide implementation opportunities: It seems feasible to implement this form of improved long-term generation for all LLM architectures. It seems to indeed be quite valuable, so we should aim for that, rather than just for `llama`.\r\n* avoiding code duplication & high maintenance costs: If we indeed support this for all LLM architectures, then we will need to create a FalconInfinite, MistralInfinite, MPTInfinite, etc. etc. etc. This is not maintainable.\r\n* `LlamaInfiniteModel` does not correspond to a new architecture, but to `llama`. With other words, you don't load `llama-infinite` models with the `LlamaInfiniteModel`, you load `llama` models. This is unusual for `transformers`, I believe.\r\n\r\n---\r\n\r\n### Preliminary benchmarking results\r\nAs I was writing this, my [perplexity benchmarking tool from `attention_sinks`](https://github.com/tomaarsen/attention_sinks) made some progress. For reference, I updated it to use `LlamaInfiniteForCausalLM` from this PR, and then fed it the first few thousand tokens of a 65k token book to measure the perplexity over time. For the unaware, a lower perplexity is better, as it is directly tied to the loss across all of the measured tokens.\r\n\r\nI've ran this experiment for:\r\n1. pure `transformers`\r\n2. [`attention_sinks`](https://github.com/tomaarsen/attention_sinks)\r\n3. window attention (in particular, I used `attention_sinks` but I set the number of sinks to 0, meaning that it only does the window attention like normal)\r\n4. `LlamaInfinite` from this PR.\r\n\r\n\r\n\r\n\r\nLet's go over the details:\r\n1. pure `transformers`: For Llama, this implementation fails for two reasons: The VRAM is linear to the input length, making the model infeasible for endless prompting (e.g. an assistant LLM wouldn't work well). The perplexity also shoots up as the model goes beyond 4096 tokens, indicating that the model stopped giving reasonable predictions there.\r\n2. `attention_sinks`: This implementation uses a window size of 1024, of which 4 are sink tokens. The result is a perplexity that stays low and a constant memory usage - ideal for assistant LLM applications, for example.\r\n3. window attention: This is the naive approach for getting constant (i.e. feasible) memory usage, and also uses a window size of 1024. It is clear that this approach fails as soon as the first few tokens are discarded due to the window size.\r\n4. `LlamaInfinite`: The results here are very interesting. The approach shows to be able to mirror the perplexity performance of regular `transformers`, and keep it going beyond 4096 tokens. However, it only seems capable of doing so because of the linear space complexity that mirrors that of `transformers`. From my point of view, this shows that Llama-infinite can likely indeed theoretically keep up fluency indefinitely, but it is just extremely impractical - nobody could *actually* scale a chat-style LLM to respond to thousands of prompts sequentially using this approach. 
And that is something that people can actually do with `attention_sinks`.\r\n\r\nTo further support my thoughts here, I've also plotted the latencies of using `transformers`, `LlamaInfinite` and `attention_sinks` as a function of the input length. (**Note:** this is a log plot)\r\n\r\n\r\n<details><summary>Click to see a non-log plot</summary>\r\n\r\n\r\n</details>\r\n\r\nAs you can see, both `LlamaInfinite` and `transformers` are equally impractical for long sequences. Sidenote: even before the memory issues cause latency problems, the `LlamaInfinite` implementation is a good bit slower than pure `transformers` or `attention_sinks`, e.g. 11 samples/s vs 15 samples/s on my device (RTX 3090).\r\n\r\nTo summarize my results: I'm not very confident in the benefit that `LlamaInfinite` has over pure `transformers`. It would work wonders if you happen to have a machine with infinite VRAM, but in the real world the memory issues likely become a problem before the perplexity gains become interesting - especially when `attention_sinks` is a very practical alternative.\r\n\r\n- Tom Aarsen",
"Hi Tom Aarsen!\r\n\r\nThank you so much for taking the time for a detailed evaluation!\r\n\r\nI see the point in implementing a plug-in separately for long-term maintenence. I am happy to help in that direction as well (e.g., your efforts in [**attention_sinks**](https://github.com/tomaarsen/attention_sinks)), especially to combine the advantages of both implementations (this and [**LM-Infinite**](https://github.com/Glaciohound/LM-Infinite)). \r\n\r\nTo be more specific:\r\n\r\n- When **encoding** (e.g., reading a document as context or calculating perplexity), if I understand correctly, `attention_sinks` currently inputs tokens one-by-one even if the whole sequence is already there, trading off time complexity for space efficiency. This is not a natural philosophy for most other `Transformers` implementations. `LM-Infinite`, however, additionally supports a *sequence mode* for encoding the sequence one at a time, which surely occupies large space per each operation, but making encoding more time efficient. (About machine, we used A100 so could encode up to 128k tokens once) It can of course support token-by-token feeding as well, if we evaluate in that way. In summary, if we combine these two as two options and smartly decide when to do which, we can potentially achieve a better balance between time and space for users with various resources and needs.\r\n\r\n- When **decoding** or generating, in my understanding, two approaches should theoretically perform identically. We can work together to debug and optimize Figure 2, as I am also trying to interpret why the curve is not smooth but rather discrete. One thing that particularly puzzles me is that, Llama-2 has a natural context window of 4k. Therefore, even `Transformers` and `attention_sinks` should behave the same before 4k (if `attention_sink` does not manually reduce the window further down to 2k) because they both need to attend on all tokens shorter than 4k.\r\n\r\n\r\nAgain, whatever the outcome and final decisions, I see this a great chance for combining and benefiting from both implementations of `LM-Infinite` and `attention_sinks`. I am definitely looking forward to working together and make this approach finally integrated into `Transformers`. If you have any opinions and comments, please feel free to let us know!\r\n\r\nChi Han",
"As a quick comment regarding the decoding section: my experiments using window attention and `attention_sinks` use a window size of 1024, which explains the difference compared between window attention & `attention_sinks` to just `transformers`. \r\nAlso, the VRAM usage curve is likely discrete because it measures the allocated memory, which is a bit of an imprecise measurement of the real usage.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored."
] | 1,696 | 1,699 | 1,699 | NONE | null | # What does this PR do?
In this PR, we implement [LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models](https://browse.arxiv.org/abs/2308.16137), proposed in August 2023, on the Llama model. LM-Infinite removes the length limits of large language models and enables them to generate to unbounded lengths with performance intact relative to training time, without any parameter updates. Results show that LM-Infinite can encode sequences as long as 128k tokens on a single A100 GPU and can generate indefinitely, thanks to its $O(n)$ time and space complexity for encoding and $O(1)$ complexity for decoding. Interestingly, the later [StreamingLLM](https://github.com/mit-han-lab/streaming-llm) work recently observed similar results with a related technique.
This implementation is related to, and in response to, [an issue](https://github.com/huggingface/transformers/issues/26553) discussing the integration of LM-Infinite into Hugging Face Transformers.
This LlamaInfinite model allows for seamless adaptation from existing usage of the original Llama models, simply by substituting `LlamaForCausalLM.from_pretrained()` with `LlamaInfiniteForCausalLM.from_pretrained()`. All other usage remains the same. This implementation is compatible with all previous Llama model checkpoints without any modifications, so no new model checkpoints are needed.
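As a minimal usage sketch (assuming this PR is installed so that `LlamaInfiniteForCausalLM` is importable from `transformers`; the checkpoint id below is only an example), the substitution looks like this:

```python
from transformers import AutoTokenizer, LlamaInfiniteForCausalLM  # class added by this PR

# Any existing Llama checkpoint can be used unchanged; this repo id is illustrative.
checkpoint = "meta-llama/Llama-2-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = LlamaInfiniteForCausalLM.from_pretrained(checkpoint)  # drop-in replacement for LlamaForCausalLM

inputs = tokenizer("Write a very long story about a lighthouse keeper.", return_tensors="pt")
# Generating past the pretraining context window is exactly the case LM-Infinite targets.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```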
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. It is related to [this issue](https://github.com/huggingface/transformers/issues/26553).
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
- @tomaarsen
- @patrickvonplaten
- @LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26645/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26645/timeline | null | true | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26645",
"html_url": "https://github.com/huggingface/transformers/pull/26645",
"diff_url": "https://github.com/huggingface/transformers/pull/26645.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26645.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26644 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26644/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26644/comments | https://api.github.com/repos/huggingface/transformers/issues/26644/events | https://github.com/huggingface/transformers/issues/26644 | 1,930,997,943 | I_kwDOCUB6oc5zGLC3 | 26,644 | dead link to vitdet-base-patch16-224 | {
"login": "dotneet",
"id": 370602,
"node_id": "MDQ6VXNlcjM3MDYwMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/370602?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dotneet",
"html_url": "https://github.com/dotneet",
"followers_url": "https://api.github.com/users/dotneet/followers",
"following_url": "https://api.github.com/users/dotneet/following{/other_user}",
"gists_url": "https://api.github.com/users/dotneet/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dotneet/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dotneet/subscriptions",
"organizations_url": "https://api.github.com/users/dotneet/orgs",
"repos_url": "https://api.github.com/users/dotneet/repos",
"events_url": "https://api.github.com/users/dotneet/events{/privacy}",
"received_events_url": "https://api.github.com/users/dotneet/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@dotneet Hey, I want to work on this issue. Can you assign to me ?",
"First come, first served @Ankit8848, feel free to open a PR.",
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
" @Ankit8848 Is this bug fixed? I can not load the 'google/vitdet-base-patch16-224' model. Thx for your help.\r\n\r\n```\r\nfrom transformers import VitDetConfig, VitDetModel\r\nmodel = VitDetModel.from_pretrained(\"google/vitdet-base-patch16-224\")\r\n```",
"@betterze i don't know, I have not worked on this issue."
] | 1,696 | 1,700 | 1,699 | CONTRIBUTOR | null | This link is dead.
[google/vitdet-base-patch16-224](https://huggingface.co/google/vitdet-base-patch16-224)
https://github.com/huggingface/transformers/blob/897a826d830e8b1e03eb482b165b5d88a7a08d5f/src/transformers/models/vitdet/configuration_vitdet.py#L35 | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26644/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26644/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26643 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26643/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26643/comments | https://api.github.com/repos/huggingface/transformers/issues/26643/events | https://github.com/huggingface/transformers/pull/26643 | 1,930,962,907 | PR_kwDOCUB6oc5cJqch | 26,643 | [WIP] Remove ZeroShotObjectDetectionPipeline from check_docstrings.py | {
"login": "Sparty",
"id": 3923604,
"node_id": "MDQ6VXNlcjM5MjM2MDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3923604?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Sparty",
"html_url": "https://github.com/Sparty",
"followers_url": "https://api.github.com/users/Sparty/followers",
"following_url": "https://api.github.com/users/Sparty/following{/other_user}",
"gists_url": "https://api.github.com/users/Sparty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Sparty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Sparty/subscriptions",
"organizations_url": "https://api.github.com/users/Sparty/orgs",
"repos_url": "https://api.github.com/users/Sparty/repos",
"events_url": "https://api.github.com/users/Sparty/events{/privacy}",
"received_events_url": "https://api.github.com/users/Sparty/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh I'm facing a few problems setting up my dev environment:\r\n\r\n1. Running `pip install -e \".[dev]\"` returns an error, \r\n`ERROR: Could not find a version that satisfies the requirement tensorflow-text<2.15; extra == \"dev\" (from transformers[dev]) (from versions: none)\r\nERROR: No matching distribution found for tensorflow-text<2.15; extra == \"dev\"`\r\n\r\nI have Tensorflow installed, and tried to install tensorflow-text via pip but it looks like that is not support for Mac with Apple chips. I also tried to build and install tensorflow-text from source, but the build fails. Is there a workaround for this?\r\n\r\n2. Running `pip install -e \".[quality]\"` works, but `make fixup` returns an error, \r\n`python utils/update_metadata.py --check-only\r\nTraceback (most recent call last):\r\n File \"/Users/usr/transformers/utils/update_metadata.py\", line 337, in <module>\r\n check_pipeline_tags()\r\n File \"/Users/usr/transformers/utils/update_metadata.py\", line 316, in check_pipeline_tags\r\n model = model[0]\r\nIndexError: tuple index out of range\r\nmake: *** [repo-consistency] Error 1`\r\n\r\nThe above error happens in the main branch as well. Am I missing something?\r\n\r\n3. Running `python3 utils/check_docstrings.py --fix_and_overwrite` returns nothing, and no files are modified, but `python3 utils/check_docstrings.py` return an error,\r\n`Traceback (most recent call last):\r\n File \"/Users/usr/transformers/utils/check_docstrings.py\", line 1270, in <module>\r\n check_docstrings(overwrite=args.fix_and_overwrite)\r\n File \"/Users/usr/transformers/utils/check_docstrings.py\", line 1262, in check_docstrings\r\n raise ValueError(error_message)\r\nValueError: There was at least one problem when checking docstrings of public objects.\r\nThe following objects docstrings do not match their signature. Run `make fix-copies` to fix this.\r\n- TFRegNetForImageClassification\r\n- TFRegNetModel\r\n- ZeroShotObjectDetectionPipeline`\r\n\r\nIf I run `make fix-copies` as suggested above, I get the following error,\r\n`Traceback (most recent call last):\r\n File \"/Users/usr/transformers/utils/check_task_guides.py\", line 86, in <module>\r\n \"asr.md\": transformers_module.models.auto.modeling_auto.MODEL_FOR_CTC_MAPPING_NAMES,\r\n File \"/Users/usr/transformers/src/transformers/utils/import_utils.py\", line 1275, in __getattr__\r\n raise AttributeError(f\"module {self.__name__} has no attribute {name}\")\r\nAttributeError: module transformers.models.auto has no attribute modeling_auto. Did you mean: 'modeling_tf_auto'?\r\nmake: *** [fix-copies] Error 1`\r\n\r\nWhat should I do to fix this?",
"Hi @Sparty There are a lot of issues regarding the environment, and I am not sure how I can help here.\r\n\r\nYou can proably try \"pip install -e .[torch-dev, testing, ]\" first",
"Hello,\r\nI have the exact same error with TFRegNetModel in my pull request #25786.\r\nAlthough I haven't touched this model, just synced my fork.\r\n",
"@Sparty \r\n\r\nWhen I remove `\"ZeroShotObjectDetectionPipeline\"` and run `python utils/check_docstrings.py --fix_and_overwrite`, nothing is changed.\r\n\r\nSo probably this is something I have to take a look.\r\n\r\nIn the meantime, would you like to work on other entries (maybe ignore the `xxx...Pipeline` entries)?",
"@ydshieh Thank you for your help. I have created https://github.com/huggingface/transformers/pull/26771 for CanineConfig entry."
] | 1,696 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #26638
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- text models: @ArthurZucker and @younesbelkada
- vision models: @amyeroberts
- speech models: @sanchit-gandhi
- graph models: @clefourrier
Library:
- flax: @sanchit-gandhi
- generate: @gante
- pipelines: @Narsil
- tensorflow: @gante and @Rocketknight1
- tokenizers: @ArthurZucker
- trainer: @muellerzr and @pacman100
Integrations:
- deepspeed: HF Trainer/Accelerate: @pacman100
- ray/raytune: @richardliaw, @amogkam
- Big Model Inference: @SunMarc
- quantization (bitsandbytes, autogpt): @SunMarc and @younesbelkada
Documentation: @stevhliu and @MKhalusova
HF projects:
- accelerate: [different repo](https://github.com/huggingface/accelerate)
- datasets: [different repo](https://github.com/huggingface/datasets)
- diffusers: [different repo](https://github.com/huggingface/diffusers)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Maintained examples (not research project or legacy):
- Flax: @sanchit-gandhi
- PyTorch: See Models above and tag the person corresponding to the modality of the example.
- TensorFlow: @Rocketknight1
-->
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26643/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26643/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26643",
"html_url": "https://github.com/huggingface/transformers/pull/26643",
"diff_url": "https://github.com/huggingface/transformers/pull/26643.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26643.patch",
"merged_at": null
} |
https://api.github.com/repos/huggingface/transformers/issues/26642 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26642/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26642/comments | https://api.github.com/repos/huggingface/transformers/issues/26642/events | https://github.com/huggingface/transformers/pull/26642 | 1,930,954,138 | PR_kwDOCUB6oc5cJohw | 26,642 | [DOCS] Update docstrings for GPT2 and Whisper tokenizer | {
"login": "McDonnellJoseph",
"id": 90898184,
"node_id": "MDQ6VXNlcjkwODk4MTg0",
"avatar_url": "https://avatars.githubusercontent.com/u/90898184?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/McDonnellJoseph",
"html_url": "https://github.com/McDonnellJoseph",
"followers_url": "https://api.github.com/users/McDonnellJoseph/followers",
"following_url": "https://api.github.com/users/McDonnellJoseph/following{/other_user}",
"gists_url": "https://api.github.com/users/McDonnellJoseph/gists{/gist_id}",
"starred_url": "https://api.github.com/users/McDonnellJoseph/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/McDonnellJoseph/subscriptions",
"organizations_url": "https://api.github.com/users/McDonnellJoseph/orgs",
"repos_url": "https://api.github.com/users/McDonnellJoseph/repos",
"events_url": "https://api.github.com/users/McDonnellJoseph/events{/privacy}",
"received_events_url": "https://api.github.com/users/McDonnellJoseph/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh Does this look good for you?",
"@ydshieh I tried troubleshooting on my side I don't understanding the difference between `vocab_file` and `merges_file`, both [merges.txt](https://huggingface.co/gpt2/blob/main/merges.txt) and [vocab.json](https://huggingface.co/gpt2/blob/main/vocab.json) from the GPT2 model card don't seem to work. Maybe something needs to be updated? I'd be happy to take a look ",
"Hi @McDonnellJoseph It's fine. No need to dive into those 2 files. I requests 2 tiny changes. Once you commit them, we are good to merge the PR.",
"@ydshieh No problem I pushed the requested changes :smile: ",
"Hi, thanks for the commit. May I know why there is irrelevant changes other than pad token id in the last commit pushed?",
"Sorry my linter formatted automatically and I dind't notice it should be fixed now",
"We are very close: just need to `make style` and push the change. `tokenization_whisper.py` has some format issue\r\n\r\n",
"Ok I thing we're good now :smile: ",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26642). All of your documentation changes will be reflected on that endpoint."
] | 1,696 | 1,697 | 1,697 | CONTRIBUTOR | null | # What does this PR do?
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
| {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26642/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26642/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26642",
"html_url": "https://github.com/huggingface/transformers/pull/26642",
"diff_url": "https://github.com/huggingface/transformers/pull/26642.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26642.patch",
"merged_at": 1697122860000
} |
https://api.github.com/repos/huggingface/transformers/issues/26641 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26641/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26641/comments | https://api.github.com/repos/huggingface/transformers/issues/26641/events | https://github.com/huggingface/transformers/pull/26641 | 1,930,938,359 | PR_kwDOCUB6oc5cJlHQ | 26,641 | [docstring] Fix docstring for DonutImageProcessor | {
"login": "abzdel",
"id": 55398496,
"node_id": "MDQ6VXNlcjU1Mzk4NDk2",
"avatar_url": "https://avatars.githubusercontent.com/u/55398496?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abzdel",
"html_url": "https://github.com/abzdel",
"followers_url": "https://api.github.com/users/abzdel/followers",
"following_url": "https://api.github.com/users/abzdel/following{/other_user}",
"gists_url": "https://api.github.com/users/abzdel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abzdel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abzdel/subscriptions",
"organizations_url": "https://api.github.com/users/abzdel/orgs",
"repos_url": "https://api.github.com/users/abzdel/repos",
"events_url": "https://api.github.com/users/abzdel/events{/privacy}",
"received_events_url": "https://api.github.com/users/abzdel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"@ydshieh should be all set! Happy to make changes if needed",
"@ydshieh Just fixed this with my most recent commit. Just a heads up - I get the following when I run `make fixup`. I looked into fixing it but I wasn't quite able to - happy to keep investigating this if needed\r\n<img width=\"544\" alt=\"image\" src=\"https://github.com/huggingface/transformers/assets/55398496/30946b54-5fc7-487a-9117-35333ed1134a\">\r\n",
"It's probably a TF issue in your environment. Other people also reported this, but I have no clear idea what's going on here as it works for me as well as on CI runner. For this PR, as CI is green, good to merge.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26641). All of your documentation changes will be reflected on that endpoint."
] | 1,696 | 1,696 | 1,696 | CONTRIBUTOR | null | fixed DonutImageProcessor docstring and removed from OBJECTS_TO_IGNORE in check_docstrings.py | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26641/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26641/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26641",
"html_url": "https://github.com/huggingface/transformers/pull/26641",
"diff_url": "https://github.com/huggingface/transformers/pull/26641.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26641.patch",
"merged_at": 1696861933000
} |
https://api.github.com/repos/huggingface/transformers/issues/26640 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26640/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26640/comments | https://api.github.com/repos/huggingface/transformers/issues/26640/events | https://github.com/huggingface/transformers/pull/26640 | 1,930,572,409 | PR_kwDOCUB6oc5cIVnS | 26,640 | fix links in README.md for the GPT, GPT-2, and Llama2 Models | {
"login": "dcarpintero",
"id": 6709785,
"node_id": "MDQ6VXNlcjY3MDk3ODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6709785?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dcarpintero",
"html_url": "https://github.com/dcarpintero",
"followers_url": "https://api.github.com/users/dcarpintero/followers",
"following_url": "https://api.github.com/users/dcarpintero/following{/other_user}",
"gists_url": "https://api.github.com/users/dcarpintero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dcarpintero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dcarpintero/subscriptions",
"organizations_url": "https://api.github.com/users/dcarpintero/orgs",
"repos_url": "https://api.github.com/users/dcarpintero/repos",
"events_url": "https://api.github.com/users/dcarpintero/events{/privacy}",
"received_events_url": "https://api.github.com/users/dcarpintero/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [] | 1,696 | 1,696 | 1,696 | CONTRIBUTOR | null | # What does this PR do?
Fix broken links as follows:
[-] https://blog.openai.com/language-unsupervised/
[+] https://openai.com/research/language-unsupervised/
[-] https://blog.openai.com/better-language-models/
[+] https://openai.com/research/better-language-models/
[-] https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/XXX
[+] https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/ | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26640/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26640/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26640",
"html_url": "https://github.com/huggingface/transformers/pull/26640",
"diff_url": "https://github.com/huggingface/transformers/pull/26640.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26640.patch",
"merged_at": 1696844084000
} |
https://api.github.com/repos/huggingface/transformers/issues/26639 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26639/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26639/comments | https://api.github.com/repos/huggingface/transformers/issues/26639/events | https://github.com/huggingface/transformers/pull/26639 | 1,930,524,793 | PR_kwDOCUB6oc5cILup | 26,639 | Avoid CI OOM | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,696 | 1,696 | 1,696 | COLLABORATOR | null | # What does this PR do?
The torch and torch pipeline jobs are again (almost) reaching the limit of the CircleCI runner's RAM (16 GB).
The torch pipeline job crashed in this [nightly run](https://app.circleci.com/pipelines/github/huggingface/transformers/74561/workflows/5c4e2b07-0688-4d86-bac6-85250ebbd741/jobs/944985).
A screenshot (of [another run](https://app.circleci.com/pipelines/github/huggingface/transformers/74427/workflows/d5130b1e-0bec-4b41-867c-7dd61722434e/jobs/942978/resources))
<img width="1061" alt="Screenshot 2023-10-06 175412" src="https://github.com/huggingface/transformers/assets/2521628/2f9dc0b6-3a0d-46a6-88c6-bd1e787f06b6">
Let's use 6 workers for now; I will try to find time to investigate what has changed recently. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26639/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26639/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26639",
"html_url": "https://github.com/huggingface/transformers/pull/26639",
"diff_url": "https://github.com/huggingface/transformers/pull/26639.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26639.patch",
"merged_at": 1696844529000
} |
https://api.github.com/repos/huggingface/transformers/issues/26638 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26638/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26638/comments | https://api.github.com/repos/huggingface/transformers/issues/26638/events | https://github.com/huggingface/transformers/issues/26638 | 1,930,469,942 | I_kwDOCUB6oc5zEKI2 | 26,638 | [Community Event] Docstring Sprint | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1990918270,
"node_id": "MDU6TGFiZWwxOTkwOTE4Mjcw",
"url": "https://api.github.com/repos/huggingface/transformers/labels/Good%20First%20Issue",
"name": "Good First Issue",
"color": "bbf794",
"default": false,
"description": ""
},
{
"id": 4608548278,
"node_id": "LA_kwDOCUB6oc8AAAABErDdtg",
"url": "https://api.github.com/repos/huggingface/transformers/labels/HACKTOBERFEST-ACCEPTED",
"name": "HACKTOBERFEST-ACCEPTED",
"color": "FF5733",
"default": false,
"description": ""
}
] | closed | false | null | [] | [
"Hi @ydshieh !, I would like to work on this issue. Can you please assign me? From the list of OBJECTS_TO_IGNORE, I would like to work on \"AlbertModel\"",
"Hi! I would like to work on 'BertGenerationConfig'.",
"Hi @ydshieh !,\r\nI would like to work on 'BartConfig'.",
"Hi @ydshieh, i would like to work on this issue, could assign me 'BertModel'.",
"I'll take DonutImageProcessor!",
"Hi @ydshieh I can take care of `GPT2Config`, `GPT2Tokenize` , `GPT2TokenizerFast`, `WhisperTokenizerFast` and `WhisperTokenizer`. ",
"ZeroShotObjectDetectionPipeline",
"is this issue is open or closed?\r\n",
"i would like to work on `DPRConfig`.",
"I will be working on `SwinModel`.",
"i would like to work on `BartTokenizerFast`, `BarthezTokenizerFast`, `BertTokenizerFast`, `AlbertTokenizerFast`, `BigBirdTokenizerFast`, `BlenderbotSmallTokenizerFast`, `BlenderbotTokenizerFast`",
"I will be working on `LlamaConfig`.",
"I'll work on `UniSpeechConfig`, `UniSpeechForCTC`, `UniSpeechSatConfig`, `UniSpeechSatForCTC`.\r\n\r\nEdit: Also working on `Wav2Vec2ForCTC`.",
"Hi @ydshieh, I'd like to work on `FlaxGPTNeoForCausalLM`, `FlaxGPTNeoModel`, `GPTNeoXConfig`, and `GPTNeoXTokenizerFast`.",
"Interested in working on `AzureOpenAiAgent` and `Blip2VisionConfig` . Just those for now maybe more further down the line.",
"I would like to work on `CodeLlamaTokenizer` and `CodeLlamaTokenizerFast`\r\n\r\n**Update Oct 13:** Also working on `RwkvConfig`",
"I am trying to install dependencies using `python3 -m pip install -e \".[dev]\"` command in a separate conda environment on macbook m1 air but I am getting different errors different times. For example:\r\n\r\n> ERROR: Could not find a version that satisfies the requirement tensorflow-text<2.15; extra == \"dev\" (from transformers[dev]) (from versions: none)\r\n> ERROR: No matching distribution found for tensorflow-text<2.15; extra == \"dev\"\r\n\r\n\r\n> ERROR: Could not find a version that satisfies the requirement decord==0.6.0; extra == \"dev\" (from transformers[dev]) (from versions: none)\r\n> ERROR: No matching distribution found for decord==0.6.0; extra == \"dev\"\r\n\r\n\r\n@ydshieh Could you please help me here.",
"> I am trying to install dependencies using `python3 -m pip install -e \".[dev]\"` command in a separate conda environment on macbook m1 air but I am getting different errors different times. For example:...\r\n\r\nI am facing a similar problem i have referenced it here #26656 ",
"> I am trying to install dependencies using `python3 -m pip install -e \".[dev]\"` command in a separate conda environment on macbook m1 air but I am getting different errors different times. For example:...\r\n\r\nI am also facing a similar problem, also Mac with m1 chip. Referenced at #26666 ",
"Hi folks, there's actually no need to run `python3 -m pip install -e \".[dev]\"` cause this will attempt to install [all the dependencies](https://github.com/huggingface/transformers/blob/897a826d830e8b1e03eb482b165b5d88a7a08d5f/setup.py#L381-L389) on your environment, including Flax, PyTorch, TensorFlow, etc.\r\n\r\nUsually for your PR you won't need all of those, so I recommend to run `python3 -m pip install -e \".[dev-torch]\"` in case you're working on a PR that involves PyTorch model for instance.",
"I made a PR for `LlamaTokenizer` and `LlamaTokenizerFast`.",
"I'll take \r\n- `CLIPImageProcessor`\r\n- `CLIPSegTextConfig`, `CLIPSegVisionConfig`, `CLIPTextConfig`\r\n- and the rest of vanilla CLIP: `CLIPTokenizer`, `CLIPTokenizerFast`, `CLIPVisionConfig`",
"Hi @ydshieh, raised PR for `SwinModel` #26679",
"I'll be working on these\r\n- `Speech2Text2Config`\r\n- `Speech2Text2Tokenizer`\r\n- `Speech2TextConfig`\r\n- `Speech2TextTokenizer`",
"[DOCSTRING]: `SamConfig`, `SamPromptEncoderConfig`,",
"I'll work on `BertGenerationTokenizer`",
"CanineConfig",
"This is issue is also open for October 2023 🔥 .\r\nIf it is closed, it's because some PR description wrote `Fix #26638`. Ignore it!",
"When I ran `python3 -m pip install -e \".[dev-torch]\" --user` command, I got below error:\r\n\r\n```\r\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\nanalytics-python 1.4.0 requires backoff==1.10.0, but you have backoff 1.11.1 which is incompatible.\r\nfastai 2.7.12 requires torch<2.1,>=1.7, but you have torch 2.1.0 which is incompatible.\r\nwandb 0.13.3 requires protobuf<4.0dev,>=3.12.0, but you have protobuf 4.24.4 which is incompatible.\r\n```\r\n\r\nAfter this, I ran `make fixup` and got below error:\r\n```\r\nNo library .py files were modified\r\npython utils/custom_init_isort.py\r\npython utils/sort_auto_mappings.py\r\ndoc-builder style src/transformers docs/source --max_len 119 --path_to_docs docs/source\r\nmake: doc-builder: No such file or directory\r\nmake: *** [extra_style_checks] Error 1\r\n```\r\n\r\nIs second error caused because of first error?\r\n\r\n@ydshieh, @NielsRogge Could you please help me here. Sorry for asking many questions. I am new to transformers library and open source in general.",
"Hei! I will work on CodeGen entries: `CodeGenConfig`, `CodeGenTokenizer`, `CodeGenTokenizerFast`."
] | 1,696 | 1,698 | 1,697 | COLLABORATOR | null | **Docstring** is important for understanding what inputs a function/method expects and what output format it returns!
This issue is part of the **HACKTOBERFEST** event 🔥 . It is a call for contributions, with the goal of helping `transformers` have the required and correct docstrings, so users can use the library more smoothly.
Adding/fixing a docstring is a simple (possibly first) contribution to Transformers and, most importantly, a very valuable contribution to the Transformers community ❤️ .
If you're interested in making a (maybe first!) contribution, please read through the **Guide to contributing** below. Before starting work, please reply in this thread with the entry you'd like to take :)
[An example of such a PR](https://github.com/huggingface/transformers/pull/26636).
### Guide to contributing:
1. Ensure you've read our contributing [guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) 📜
2. Be sure you installed the development dependencies with `pip install -e ".[dev]"`, as described in the contributor guidelines above, to ensure that the code quality tools in `make fixup` can run.
3. Look at the file `utils/check_docstrings.py`:
- find the line `OBJECTS_TO_IGNORE =`
   - choose a name from the list `OBJECTS_TO_IGNORE` (**make sure it has not already been taken by someone in the comments of this issue**)
- **Claim the entry in this thread (confirm no one is working on it)** 🎯
- Let's select **one single** entry for a PR, or at most the entries from the same model architecture (its config/tokenizer/model/processor objects)
4. Remove the selected item (in step 3.) from `OBJECTS_TO_IGNORE`
- commit the changes
5. run `python3 utils/check_docstrings.py --fix_and_overwrite`
- You might see something like:
- <img width="451" alt="Screenshot 2023-10-06 174057" src="https://github.com/huggingface/transformers/assets/2521628/750f241b-05e9-4970-8a20-f58a6d1192db">
- commit the changes
6. fill in the information where `<fill_type>` or `<fill_docstring>` appear (see the short, illustrative example right after this guide):
- you can usually find the content to fill from other files (by searching the codebase)
- compared to step 5.), the output now looks like:
- <img width="305" alt="Screenshot 2023-10-06 182359" src="https://github.com/huggingface/transformers/assets/2521628/fa6c895e-ed56-468b-834c-490eefa0f954">
- commit the changes
7. run `utils/check_docstrings.py`
   - make sure nothing changes. Otherwise, further work is required.
8. run `make fixup`
9. Open the PR
- with the title having format `[docstring] ... (entry name you work on) ...`
- wait CI to be green
- then tag me @ydshieh
- otherwise, try to fix the failing tests if possible 🙏 . If necessary, ping me for helping on this.
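To make step 6 concrete, here is a small, purely illustrative sketch of what filling the placeholders typically looks like (the argument name, type and default below are invented for illustration and are not taken from any particular model):

```python
# Placeholder left in a config docstring by `python3 utils/check_docstrings.py --fix_and_overwrite`
# (the argument below is hypothetical):
#
#     hidden_size (`<fill_type>`, *optional*, defaults to 768):
#         <fill_docstring>
#
# The same argument after filling it in by hand, usually by reusing the wording
# from a similar model's configuration class:
#
#     hidden_size (`int`, *optional*, defaults to 768):
#         Dimensionality of the encoder layers and the pooler layer.
```

After the placeholders are filled, re-running the checker in step 7 should report no remaining differences for your entry.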
**Looking forward to your contributions 🔥 ❤️ 🚀 !** | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26638/reactions",
"total_count": 12,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 10,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26638/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26637 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26637/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26637/comments | https://api.github.com/repos/huggingface/transformers/issues/26637/events | https://github.com/huggingface/transformers/pull/26637 | 1,930,429,531 | PR_kwDOCUB6oc5cH4bA | 26,637 | Add default template warning | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_26637). All of your documentation changes will be reflected on that endpoint.",
"Hmmm I'm not sure this warning will be useful for the majority of users; wouldn't it be better as an INFO level? \r\n\r\nI think having class-level templates is better than not having anything, and IMO it would make sense to have a sensible default (original architecture training scheme) rather than forcing each checkpoint to have their own definition. Or do you expect every checkpoint to have a different template? ",
"Just from looking at models on the Hub, I think a lot of models use templates totally different from the default class one. For example, if you look at the top trending models on the Hub today, the top model is `Mistral-7B-instruct`, which uses `LlamaTokenizer` with a custom chat template, and the second place model is `Mistral-7B-OpenOrca`, which also uses `LlamaTokenizer` but with a ChatML template. Neither of these templates match the class-level template.\r\n\r\nThe problem right now is if a user uses `apply_chat_template` with a model like that which doesn't have a `chat_template` attribute, it will appear to work, but they'll silently get the wrong, class-level template instead! This is exactly what we want to avoid with chat templates.\r\n\r\nI think raising a warning is the right approach - it won't break existing workflows, but it will flag a potential source of issues, and it might notify users and maintainers to add proper chat templates to the models they're using.",
"Ok, sounds good to me in that case. Thanks for the explanation!",
"Sure! Also @ArthurZucker to respond to your comments, I'm probably going to start with this, then maybe we can begin deprecating `default_class_template` over time, but I want to do it slowly because it breaks old workflows. \r\n\r\nThe `FutureWarning` suggestion was good, though, I'll add that now!",
"`FutureWarning` added!",
"Update: After testing, I took `FutureWarning` out because it clashes with `warning_once`, and I don't want to spam these messages too much!"
] | 1,696 | 1,697 | 1,697 | MEMBER | null | Lots of user repos use common tokenizers like `LlamaTokenizer` even for non-LLaMA models. Unfortunately, these tokenizers come with default chat templates, which may result in user confusion if they use `apply_chat_template`. We want to avoid a scenario where users think they're getting the right template but actually aren't!
This PR raises a short warning the first time a `default_chat_template` is read, which happens when `apply_chat_template` or `ConversationalPipeline` is called for a model without a `chat_template` set. The warning tells the user what's happening, and suggests adding an explicit chat template.
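As a hedged illustration of that suggestion (the repo id and template string below are placeholders rather than a real model), a maintainer can avoid the class-level fallback by attaching an explicit template to the tokenizer:

```python
from transformers import AutoTokenizer

# Placeholder repo id; any tokenizer that currently lacks a `chat_template` attribute applies here.
tokenizer = AutoTokenizer.from_pretrained("some-org/some-chat-model")

# A minimal Jinja template, written purely for illustration.
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{{ '<' + message['role'] + '> ' + message['content'] + ' ' }}"
    "{% endfor %}"
)

# Saving (or pushing) the tokenizer persists the template in tokenizer_config.json,
# so `apply_chat_template` no longer falls back to the class-level default.
tokenizer.save_pretrained("./some-chat-model-with-template")
```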
Over time, the goal is to eventually deprecate and remove `default_chat_template`, because we want to avoid using class-level templates entirely, and move to explicit repo level `chat_template` attributes. This PR starts that process, while still retaining the feature for backward compatibility. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26637/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26637/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26637",
"html_url": "https://github.com/huggingface/transformers/pull/26637",
"diff_url": "https://github.com/huggingface/transformers/pull/26637.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26637.patch",
"merged_at": 1697647132000
} |
https://api.github.com/repos/huggingface/transformers/issues/26636 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26636/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26636/comments | https://api.github.com/repos/huggingface/transformers/issues/26636/events | https://github.com/huggingface/transformers/pull/26636 | 1,930,398,476 | PR_kwDOCUB6oc5cHyCH | 26,636 | [docstring] Fix docstring for `AlbertConfig` | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,696 | 1,696 | 1,696 | COLLABORATOR | null | # What does this PR do?
An example demonstrating how to fix a docstring. I will write a step-by-step guide on a new issue page, but this PR serves as the final output of such a fix. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26636/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26636/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26636",
"html_url": "https://github.com/huggingface/transformers/pull/26636",
"diff_url": "https://github.com/huggingface/transformers/pull/26636.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26636.patch",
"merged_at": 1696606582000
} |
https://api.github.com/repos/huggingface/transformers/issues/26635 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26635/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26635/comments | https://api.github.com/repos/huggingface/transformers/issues/26635/events | https://github.com/huggingface/transformers/issues/26635 | 1,930,242,171 | I_kwDOCUB6oc5zDSh7 | 26,635 | Training does not stop as expected with finite iterable dataset | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/users/qgallouedec/followers",
"following_url": "https://api.github.com/users/qgallouedec/following{/other_user}",
"gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions",
"organizations_url": "https://api.github.com/users/qgallouedec/orgs",
"repos_url": "https://api.github.com/users/qgallouedec/repos",
"events_url": "https://api.github.com/users/qgallouedec/events{/privacy}",
"received_events_url": "https://api.github.com/users/qgallouedec/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "muellerzr",
"id": 7831895,
"node_id": "MDQ6VXNlcjc4MzE4OTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7831895?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/muellerzr",
"html_url": "https://github.com/muellerzr",
"followers_url": "https://api.github.com/users/muellerzr/followers",
"following_url": "https://api.github.com/users/muellerzr/following{/other_user}",
"gists_url": "https://api.github.com/users/muellerzr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/muellerzr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/muellerzr/subscriptions",
"organizations_url": "https://api.github.com/users/muellerzr/orgs",
"repos_url": "https://api.github.com/users/muellerzr/repos",
"events_url": "https://api.github.com/users/muellerzr/events{/privacy}",
"received_events_url": "https://api.github.com/users/muellerzr/received_events",
"type": "User",
"site_admin": false
}
] | [
"This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.\n\nPlease note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.",
"This issue still needs to be addressed ",
"@qgallouedec I'm not sure said dataset is considered finite? Just iterable. So its expected, but not the expectation you have (where it's finite). Take this minimal example:\r\n\r\n```python\r\nimport torch\r\nfrom datasets import Dataset\r\nfrom torch.utils.data import DataLoader\r\n\r\n\r\ndata = torch.tensor([[i, i] for i in range(10)], dtype=torch.float32)\r\ndataset = Dataset.from_dict({\"a\": data}).to_iterable_dataset()\r\n\r\ndl = DataLoader(dataset, batch_size=2)\r\n\r\nfor _ in range(2):\r\n for batch in dl:\r\n print(batch)\r\n```\r\n\r\nYou'll find we iterate through the dataloader twice here, and as such the dataset twice. This would be a question for the Datasets library, not transformers. ",
"> @qgallouedec I'm not sure said dataset is considered finite? Just iterable. \r\n\r\nWhat is a finite iterable dataset then?\r\n\r\n> You'll find we iterate through the dataloader twice here, and as such the dataset twice. \r\n\r\nThat's not precisely true: the iterable is recreated when you do `for batch in dl`. So technically, we create two iterators, and we iterate only once through each of them.\r\n\r\n> This would be a question for the Datasets library, not transformers.\r\n\r\nI would disagree on that. The question is about the role of `max_steps` as a trainer argument. According to the doc:\r\n\r\n> `max_steps` (`int`, _optional_, defaults to -1) — If set to a positive number, the total number of training steps to perform. Overrides num_train_epochs. In case of using a finite iterable dataset the training may stop before reaching the set number of steps when all data is exhausted\r\n\r\nI may be wrong, but `Dataset.from_dict({\"a\": torch.tensor([[i, i] for i in range(10)])}).to_iterable_dataset()` seems to be an finite iterable dataset (since calling `next(it)` enough time raises a `StopIteration`)\r\n\r\nThere are two possibilities:\r\n- Either the `max_step` documentation is unclear, at least to me.\r\n- Or it's a bug, because the training doesn't stop when expected (in the example, after 10 iterations, instead of 20).",
"The docs are incorrect then, as the behavior is to re-iterate until max steps. This is true as well after talking to more core maintainers. Would you like to open a PR? (How long it's been like this, I can't likely say. But my guess is quite a while, at least 3-6 months)"
] | 1,696 | 1,700 | 1,700 | CONTRIBUTOR | null | ### System Info
- `transformers` version: 4.32.1
- Platform: macOS-13.5-arm64-arm-64bit
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate version: 0.23.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.6.0 (cpu)
- Jax version: 0.3.25
- JaxLib version: 0.3.25
- Using GPU in script?: MPS device
- Using distributed or parallel set-up in script?: no
### Who can help?
@muellerzr and @pacman100
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
According to the documentation, training with a finite iterable dataset may stop before reaching the defined max_steps if all data is exhausted (see https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments.max_steps). However, in practice, I observed that training does not stop as expected under these conditions.
```python
import torch
from datasets import Dataset
from torch import nn
from transformers import Trainer, TrainingArguments
class MyModule(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(2, 2)
def forward(self, a, return_loss=True):
output = self.linear(a)
return {"loss": output.sum().abs()}
data = torch.tensor([[i, i] for i in range(10)], dtype=torch.float32) # [[0., 0.], [1., 1.], [2., 2.], ...]
dataset = Dataset.from_dict({"a": data}).to_iterable_dataset()
args = TrainingArguments(output_dir=".", per_device_train_batch_size=1, max_steps=20)
trainer = Trainer(model=MyModule(), args=args, train_dataset=dataset)
trainer.train()
```
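One possible workaround, sketched here as an illustration only (it is not part of the original report, and it assumes the true number of examples in the stream is known): derive `max_steps` so that training covers exactly one pass over the data, reusing the definitions from the script above.
```python
import math

# Sketch of a workaround: with an iterable dataset the Trainer keeps cycling the
# stream until `max_steps`, so if the number of examples is known (10 here, an
# assumption matching the toy data above) `max_steps` can be set to one full pass.
num_examples = 10
batch_size = 1
steps_for_one_pass = math.ceil(num_examples / batch_size)

args = TrainingArguments(output_dir=".", per_device_train_batch_size=batch_size, max_steps=steps_for_one_pass)
trainer = Trainer(model=MyModule(), args=args, train_dataset=dataset)
trainer.train()  # stops after 10 steps, i.e. one pass over the stream
```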
### Expected behavior
Based on the documentation, training should stop before `max_steps` if all data in a finite iterable dataset is used (10 steps in the example).
**Actual Behavior**
Training continues until `max_steps` is reached, looping over the iterable dataset. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26635/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/transformers/issues/26635/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26634 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26634/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26634/comments | https://api.github.com/repos/huggingface/transformers/issues/26634/events | https://github.com/huggingface/transformers/pull/26634 | 1,930,240,116 | PR_kwDOCUB6oc5cHPwO | 26,634 | Better way to run AMD CI with different flavors | {
"login": "ydshieh",
"id": 2521628,
"node_id": "MDQ6VXNlcjI1MjE2Mjg=",
"avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydshieh",
"html_url": "https://github.com/ydshieh",
"followers_url": "https://api.github.com/users/ydshieh/followers",
"following_url": "https://api.github.com/users/ydshieh/following{/other_user}",
"gists_url": "https://api.github.com/users/ydshieh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydshieh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydshieh/subscriptions",
"organizations_url": "https://api.github.com/users/ydshieh/orgs",
"repos_url": "https://api.github.com/users/ydshieh/repos",
"events_url": "https://api.github.com/users/ydshieh/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydshieh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,696 | 1,697 | 1,697 | COLLABORATOR | null | # What does this PR do?
Just showing an approach which duplicates files, but in a minimal way. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26634/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26634/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/transformers/pulls/26634",
"html_url": "https://github.com/huggingface/transformers/pull/26634",
"diff_url": "https://github.com/huggingface/transformers/pull/26634.diff",
"patch_url": "https://github.com/huggingface/transformers/pull/26634.patch",
"merged_at": 1697466271000
} |
https://api.github.com/repos/huggingface/transformers/issues/26633 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26633/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26633/comments | https://api.github.com/repos/huggingface/transformers/issues/26633/events | https://github.com/huggingface/transformers/issues/26633 | 1,929,841,416 | I_kwDOCUB6oc5zBwsI | 26,633 | RuntimeError when running batched inference for Salesforce/blip2-opt-2.7b VQA | {
"login": "Keracles",
"id": 103105238,
"node_id": "U_kgDOBiVC1g",
"avatar_url": "https://avatars.githubusercontent.com/u/103105238?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Keracles",
"html_url": "https://github.com/Keracles",
"followers_url": "https://api.github.com/users/Keracles/followers",
"following_url": "https://api.github.com/users/Keracles/following{/other_user}",
"gists_url": "https://api.github.com/users/Keracles/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Keracles/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Keracles/subscriptions",
"organizations_url": "https://api.github.com/users/Keracles/orgs",
"repos_url": "https://api.github.com/users/Keracles/repos",
"events_url": "https://api.github.com/users/Keracles/events{/privacy}",
"received_events_url": "https://api.github.com/users/Keracles/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"#21578 I already looked at this issue, but mine is with vqa.",
"I've fixed the issue it's running fine could you guide me to the folder so that I can pull requests \r\n",
"Hey, there. I don't understand what you want from me. Can you re-phrase or explain me more ?",
"yaa I fixed ur issue but I'm not able to find the repository where I'm required to push this code",
"The repository is transformers : the one i'm writting this issue from",
"Hi,\r\n\r\nYou're preparing 2 images but only one text:\r\n```\r\ninputs = processor(images=[image, image], text=prompt, return_tensors=\"pt\").to(device, torch.float16)\r\n```\r\nhence you need to do:\r\n```\r\ninputs = processor(images=[image, image], text=[prompt, prompt], return_tensors=\"pt\").to(device, torch.float16)\r\n```",
"Thanks for the answer! It doesn't seem logical at first glance because the argument is text without an 's'."
] | 1,696 | 1,696 | 1,696 | NONE | null | ### System Info
- `transformers` version: 4.34.0
- Platform: Linux-4.14.322-246.539.amzn2.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.3
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help?
@gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
# Code To Reproduce
```python
from PIL import Image
import requests
from transformers import Blip2Processor, Blip2ForConditionalGeneration
import torch
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
"Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
)
model.to(device)
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# ---------------------- Change made here ---------------------- #
# Passing `images=[image, image]` instead of `images=image` for testing batched inference
# And adding a text to the input
# inputs = processor(images=[image, image], return_tensors="pt").to(device, torch.float16)
prompt = "Question: What is a dinosaur holding? Answer:"
inputs = processor(images=[image, image], text=prompt, return_tensors="pt").to(device, torch.float16)
# ---------------------------------------------------------------- #
generated_ids = model.generate(**inputs)
```
# Error Stack Trace
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
Cell In[8], line 7
4 inputs = processor(images=[image, image], text=prompt, return_tensors="pt").to(device, torch.float16)
5 # ---------------------------------------------------------------- #
----> 7 generated_ids = model.generate(**inputs)
File /opt/conda/lib/python3.8/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
112 @functools.wraps(func)
113 def decorate_context(*args, **kwargs):
114 with ctx_factory():
--> 115 return func(*args, **kwargs)
File /opt/conda/lib/python3.8/site-packages/transformers/models/blip_2/modeling_blip_2.py:1874, in Blip2ForConditionalGeneration.generate(self, pixel_values, input_ids, attention_mask, **generate_kwargs)
1872 if attention_mask is None:
1873 attention_mask = torch.ones_like(input_ids)
-> 1874 attention_mask = torch.cat([language_attention_mask, attention_mask.to(language_attention_mask.device)], dim=1)
1876 # concatenate query embeddings with prompt embeddings
1877 inputs_embeds = self.get_input_embeddings()(input_ids)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 2 but got size 1 for tensor number 1 in the list.
```
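Applying the fix suggested in the comments above (one prompt per image, so the text batch matches the image batch) to the same reproduction script gives a minimal sketch like the following; `padding=True` and the decoding step are illustrative assumptions rather than part of the original report.
```python
# Sketch of the suggested fix: pass one prompt per image so the text batch
# matches the image batch (the names below reuse the reproduction script above).
prompts = [prompt, prompt]
inputs = processor(
    images=[image, image], text=prompts, padding=True, return_tensors="pt"
).to(device, torch.float16)

generated_ids = model.generate(**inputs)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```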
### Expected behavior
I'd expect to be able to run batched inference with a batch size > 1 and a text prompt for VQA. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26633/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26633/timeline | completed | null | null |
https://api.github.com/repos/huggingface/transformers/issues/26632 | https://api.github.com/repos/huggingface/transformers | https://api.github.com/repos/huggingface/transformers/issues/26632/labels{/name} | https://api.github.com/repos/huggingface/transformers/issues/26632/comments | https://api.github.com/repos/huggingface/transformers/issues/26632/events | https://github.com/huggingface/transformers/issues/26632 | 1,929,838,712 | I_kwDOCUB6oc5zBwB4 | 26,632 | KeyError: 'cardinality' while running Trainer | {
"login": "jinzzasol",
"id": 49014051,
"node_id": "MDQ6VXNlcjQ5MDE0MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/49014051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinzzasol",
"html_url": "https://github.com/jinzzasol",
"followers_url": "https://api.github.com/users/jinzzasol/followers",
"following_url": "https://api.github.com/users/jinzzasol/following{/other_user}",
"gists_url": "https://api.github.com/users/jinzzasol/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jinzzasol/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jinzzasol/subscriptions",
"organizations_url": "https://api.github.com/users/jinzzasol/orgs",
"repos_url": "https://api.github.com/users/jinzzasol/repos",
"events_url": "https://api.github.com/users/jinzzasol/events{/privacy}",
"received_events_url": "https://api.github.com/users/jinzzasol/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | [
"Hi, thank you for opening the issue.\r\n\r\nCould you please share your system info with us. You can run the command `transformers-cli env` and copy-paste its output below.",
"@ydshieh Sorry but I'm using Google Colab and I'm not able to run a command in Colab. It is a Pro feature.",
"Just type `!transformers-cli env` no?\r\n\r\nOtherwise share the colab notebook maybe?",
"Ok, I just found out that I should install `transformers` again before running it.\r\n\r\n```\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.34.0\r\n- Platform: Linux-5.15.120+-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.17.3\r\n- Safetensors version: 0.4.0\r\n- Accelerate version: not installed\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.0.1+cu118 (False)\r\n- Tensorflow version (GPU?): 2.13.0 (False)\r\n- Flax version (CPU?/GPU?/TPU?): 0.7.4 (cpu)\r\n- Jax version: 0.4.16\r\n- JaxLib version: 0.4.16\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n\r\nI0000 00:00:1696604507.367394 1255 tfrt_cpu_pjrt_client.cc:352] TfrtCpuClient destroyed.\r\n```",
"Thank you.\r\n\r\n@Rocketknight1 could you take a look here?",
"@ydshieh Just to tell you, I ran the same code on my local machine and I encountered the same issue. \r\nBelow is the OS info.\r\n\r\n```\r\nCopy-and-paste the text below in your GitHub issue and FILL OUT the two last points.\r\n\r\n- `transformers` version: 4.33.3\r\n- Platform: Linux-5.15.90.2-microsoft-standard-WSL2-x86_64-with-glibc2.35\r\n- Python version: 3.10.12\r\n- Huggingface_hub version: 0.17.3\r\n- Safetensors version: 0.3.3\r\n- Accelerate version: not installed\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): not installed (NA)\r\n- Tensorflow version (GPU?): 2.14.0 (True)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\r\n```",
"Woah, this is a blast from the past! `TFTrainer` is very old and completely deprecated now, and we don't support it anymore. We generally advise people to just use the Keras API for TF.\r\n\r\nYou can keep most of your code the same up to the `model.compile()` line, and then on the next line I'd just do something like this:\r\n\r\n```python\r\nmodel.fit(train_tokenized, y_train, validation_data=(val_tokenized, y_val), epochs=3)\r\n```\r\n\r\nFor more info on training Hugging Face models with TF, please see our [TensorFlow Philosophy](https://huggingface.co/blog/tensorflow-philosophy) post, or any of the Keras documentation, particularly the docs on supported dataset types and `model.fit()` - you can find them [here](https://keras.io/getting_started/intro_to_keras_for_engineers/#training-models-with-fit).",
"I forgot it's TFTrainer. Sorry @Rocketknight1 !",
"No problem, it was a nice nostalgia moment!",
"@Rocketknight1 @ydshieh Thank you all. I was referring to one post I found and did not know this was deprecated. ",
"```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n[<ipython-input-32-021f97abfd29>](https://localhost:8080/#) in <cell line: 1>()\r\n----> 1 model.fit(train_tokenized, y_train, validation_data=(val_tokenized, y_val), epochs=3)\r\n\r\n3 frames\r\n[/usr/local/lib/python3.10/dist-packages/tensorflow/core/function/polymorphism/function_type.py](https://localhost:8080/#) in __hash__(self)\r\n 144 \r\n 145 def __hash__(self):\r\n--> 146 return hash((self.name, self.kind, self.optional, self.type_constraint))\r\n 147 \r\n 148 def __repr__(self):\r\n\r\nValueError: Cannot generate a hashable key for IteratorSpec(({'input_ids': TensorSpec(shape=(None, 128), dtype=tf.int32, name=None), 'attention_mask': TensorSpec(shape=(None, 128), dtype=tf.int32, name=None)}, TensorSpec(shape=(None,), dtype=tf.int64, name=None)),) because the _serialize() method returned an unsupproted value of type <class 'transformers.tokenization_utils_base.BatchEncoding'>\r\n```\r\nI ran the `model.fit()` and got this error message.\r\nI think this is a different topic but any ideas?",
"Yeah, that's a regular problem we have! Just do `train_tokenized = dict(train_tokenized)` before passing the data to `model.fit()` - the output data is a `BatchEncoding` that Keras doesn't quite understand.\r\n\r\nOne day I'll figure out a cleaner solution for it, but I'll probably have to slip a couple of shims into Keras's methods!",
"Thank you!"
] | 1,696 | 1,696 | 1,696 | NONE | null | ### System Info
Google Colab
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
I encountered this issue while running the model. The dataset is the IMDB movie review dataset from Kaggle.
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in __getattr__(self, item)
265 try:
--> 266 return self.data[item]
267 except KeyError:
KeyError: 'cardinality'
During handling of the above exception, another exception occurred:
```
Below is the code:
**Data Cleaning**
```
import pandas as pd  # assumed import; not shown in the original snippet

imdb = pd.read_csv("IMDB Dataset.csv")
df_imdb = imdb.copy()
df_imdb = df_imdb.replace({'sentiment': {'positive': 1, 'negative': 0}})
df_imdb.drop_duplicates(keep='first', inplace=True)
from sklearn.model_selection import train_test_split
train, test = train_test_split(df_imdb, test_size=0.4, shuffle=False)  # the original snippet referenced an undefined `df_cleaned`
val, test = train_test_split(test, test_size=0.5, shuffle=True)
train = train.reset_index(drop=True)
val = val.reset_index(drop=True)
test = test.reset_index(drop=True)
train.shape, val.shape, test.shape,
```
**Preprocessing**
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
# Create new index
train_idx = [i for i in range(len(train.index))]
test_idx = [i for i in range(len(test.index))]
val_idx = [i for i in range(len(val.index))]
# Convert to numpy
x_train = train['review'].values[train_idx]
x_test = test['review'].values[test_idx]
x_val = val['review'].values[val_idx]
y_train = train['sentiment'].values[train_idx]
y_test = test['sentiment'].values[test_idx]
y_val = val['sentiment'].values[val_idx]
# Tokenize datasets
train_tokenized = tokenizer(list(x_train), return_tensors='tf', truncation=True, padding=True, max_length=128)
val_tokenized = tokenizer(list(x_val), return_tensors='tf', truncation=True, padding=True, max_length=128)
test_tokenized = tokenizer(list(x_test), return_tensors='tf', truncation=True, padding=True, max_length=128)
```
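As an aside, since `tokenizer(...)` returns a `BatchEncoding`, a common pattern is to convert it to a plain dict and wrap it, together with the labels, in a `tf.data.Dataset` before training. A minimal sketch, assuming TensorFlow is imported as `tf` (this is not part of the original report):
```python
import tensorflow as tf

# Sketch: convert the BatchEncoding outputs to plain dicts and wrap them, together
# with the integer labels, in batched tf.data.Dataset objects.
train_ds = tf.data.Dataset.from_tensor_slices((dict(train_tokenized), y_train)).batch(32)
val_ds = tf.data.Dataset.from_tensor_slices((dict(val_tokenized), y_val)).batch(32)
```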
```
import tensorflow as tf  # assumed import, needed for the optimizer below
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased")
optimizer = tf.keras.optimizers.Adam(learning_rate=5e-5)  # `optimizer` was undefined in the original snippet; Adam is an assumption
model.compile(optimizer=optimizer) # No loss argument!
from transformers import TFTrainer, TFTrainingArguments
training_args = TFTrainingArguments(
output_dir="./sentiment_model",
per_device_train_batch_size=32,
per_device_eval_batch_size=32,
num_train_epochs=3,
evaluation_strategy="steps",
eval_steps=500, # Adjust as needed
save_total_limit=2,
)
trainer = TFTrainer(
model=model,
args=training_args,
train_dataset=train_tokenized,
eval_dataset=val_tokenized,
)
trainer.train() # **<- where the error occurred**
```
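Given that `TFTrainer` is deprecated (see the comments above), a minimal sketch of the recommended Keras-style replacement is shown below; the `dict(...)` calls avoid the `BatchEncoding` hashing error also discussed in the comments. Either these plain dicts or the `tf.data` datasets sketched earlier would work as input.
```python
# Sketch of the Keras-style replacement for the deprecated TFTrainer, reusing the
# compiled model and tokenized data from the snippets above.
model.fit(
    dict(train_tokenized),                       # plain dict instead of BatchEncoding
    y_train,
    validation_data=(dict(val_tokenized), y_val),
    epochs=3,
)
```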
### Expected behavior
trainer.train() should run. | {
"url": "https://api.github.com/repos/huggingface/transformers/issues/26632/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/transformers/issues/26632/timeline | completed | null | null |