OSError: ./gemma-3-1b-it does not appear to have a file named preprocessor_config.json.
#21 · opened by zjnyly
Hi, it seems that the `preprocessor_config.json` file is missing. I've never seen this file before. I ran into this error when I tried to use llm-compressor to quantize the model:
Traceback (most recent call last):
  File "/home/zjnyly/LLMs/llm-compressor.py", line 87, in <module>
    oneshot(
  File "/home/zjnyly/miniconda3/envs/py310_new/lib/python3.10/site-packages/compressed_tensors/utils/helpers.py", line 190, in wrapped
    return func(*args, **kwargs)
  File "/home/zjnyly/miniconda3/envs/py310_new/lib/python3.10/site-packages/llmcompressor/transformers/finetune/text_generation.py", line 33, in oneshot
    oneshot(**kwargs)
  File "/home/zjnyly/miniconda3/envs/py310_new/lib/python3.10/site-packages/llmcompressor/entrypoints/oneshot.py", line 178, in oneshot
    one_shot = Oneshot(**kwargs)
  File "/home/zjnyly/miniconda3/envs/py310_new/lib/python3.10/site-packages/llmcompressor/entrypoints/oneshot.py", line 110, in __init__
    pre_process(model_args)
  File "/home/zjnyly/miniconda3/envs/py310_new/lib/python3.10/site-packages/llmcompressor/entrypoints/utils.py", line 58, in pre_process
    model_args.processor = initialize_processor_from_path(
  File "/home/zjnyly/miniconda3/envs/py310_new/lib/python3.10/site-packages/llmcompressor/entrypoints/utils.py", line 240, in initialize_processor_from_path
    processor = AutoProcessor.from_pretrained(
  File "/home/zjnyly/miniconda3/envs/py310_new/lib/python3.10/site-packages/transformers/models/auto/processing_auto.py", line 347, in from_pretrained
    return processor_class.from_pretrained(
  File "/home/zjnyly/miniconda3/envs/py310_new/lib/python3.10/site-packages/transformers/processing_utils.py", line 1079, in from_pretrained
    args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
  File "/home/zjnyly/miniconda3/envs/py310_new/lib/python3.10/site-packages/transformers/processing_utils.py", line 1143, in _get_arguments_from_pretrained
    args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
  File "/home/zjnyly/miniconda3/envs/py310_new/lib/python3.10/site-packages/transformers/models/auto/image_processing_auto.py", line 467, in from_pretrained
    raise initial_exception
  File "/home/zjnyly/miniconda3/envs/py310_new/lib/python3.10/site-packages/transformers/models/auto/image_processing_auto.py", line 449, in from_pretrained
    config_dict, _ = ImageProcessingMixin.get_image_processor_dict(
  File "/home/zjnyly/miniconda3/envs/py310_new/lib/python3.10/site-packages/transformers/image_processing_base.py", line 340, in get_image_processor_dict
    resolved_image_processor_file = cached_file(
  File "/home/zjnyly/miniconda3/envs/py310_new/lib/python3.10/site-packages/transformers/utils/hub.py", line 266, in cached_file
    file = cached_files(path_or_repo_id=path_or_repo_id, filenames=[filename], **kwargs)
  File "/home/zjnyly/miniconda3/envs/py310_new/lib/python3.10/site-packages/transformers/utils/hub.py", line 381, in cached_files
    raise OSError(
OSError: ./gemma-3-1b-it does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co/./gemma-3-1b-it/tree/main' for available files.
Hi @zjnyly ,
I have reproduced the issue in Colab. The error occurs because, when oneshot is called, llm-compressor tries to initialize a processor via AutoProcessor, which looks for a preprocessor_config.json file. The google/gemma-3-1b-it model is a text-only checkpoint and doesn't contain that file. Instead, you can pass the tokenizer parameter to the oneshot function when running the quantization.
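A minimal sketch of that workaround, assuming the llmcompressor oneshot entrypoint shown in the traceback and a local ./gemma-3-1b-it checkout; the dataset name, recipe path, and output directory below are placeholders, not values from this thread:

```python
import os


def needs_explicit_tokenizer(model_dir: str) -> bool:
    # Text-only checkpoints such as gemma-3-1b-it ship tokenizer files but no
    # preprocessor_config.json, so AutoProcessor.from_pretrained() raises OSError.
    return not os.path.isfile(os.path.join(model_dir, "preprocessor_config.json"))


def quantize_with_explicit_tokenizer(model_dir: str = "./gemma-3-1b-it") -> None:
    # Deferred imports: transformers and llm-compressor are only needed
    # when the quantization is actually run.
    from transformers import AutoTokenizer
    from llmcompressor import oneshot

    # Load the tokenizer ourselves and hand it to oneshot(), so llm-compressor
    # skips the AutoProcessor lookup that fails in the traceback above.
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    oneshot(
        model=model_dir,
        tokenizer=tokenizer,        # bypasses initialize_processor_from_path
        dataset="open_platypus",    # placeholder calibration dataset
        recipe="recipe.yaml",       # placeholder quantization recipe
        output_dir=model_dir + "-quantized",  # placeholder output path
    )
```

The key point is only the tokenizer= argument; the remaining oneshot arguments should match whatever your original quantization script already passed.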
Please find the following gist file for your reference.
Thanks.