[Dataset viewer: single column "text", string lengths 0 to 696]

) from e
OSError: You are trying to access a gated repo.
Make sure to have access to it at https://huggingface.co/arcee-ai/AFM-4.5B.
403 Client Error. (Request ID: Root=1-6889b14c-48562b630e97e6b6468ad43c;9290a64d-9123-47c2-8ebd-1707b6e806ba)

Cannot access gated repo for url https://huggingface.co/arcee-ai/AFM-4.5B/resolve/main/config.json.
Access to model arcee-ai/AFM-4.5B is restricted and you are not in the authorized list. Visit https://huggingface.co/arcee-ai/AFM-4.5B to ask for access.

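The 403 above is an authorization failure rather than a bug: arcee-ai/AFM-4.5B is a gated repo, so the run has to present a token for an account that was granted access on the model page. A minimal sketch of the authenticated path, assuming access has already been granted and that a token is exposed through the HF_TOKEN environment variable (both assumptions; neither is visible in the log):

    import os
    from huggingface_hub import login
    from transformers import AutoConfig

    # HF_TOKEN is an assumed environment variable holding a read token
    # created at https://huggingface.co/settings/tokens.
    login(token=os.environ["HF_TOKEN"])

    # Once the account is on the authorized list, the config fetch that
    # failed above (resolve/main/config.json) goes through.
    config = AutoConfig.from_pretrained("arcee-ai/AFM-4.5B")
    print(config.model_type)
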
No suitable GPU found for deepcogito/cogito-v2-preview-deepseek-671B-MoE | 1624.83 GB VRAM requirement
No suitable GPU found for deepseek-ai/DeepSeek-R1-0528 | 1657.55 GB VRAM requirement
No suitable GPU found for deepseek-ai/DeepSeek-R1 | 1657.55 GB VRAM requirement
No suitable GPU found for deepseek-ai/DeepSeek-V3 | 1657.55 GB VRAM requirement

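For scale, those VRAM figures line up with a simple weights-times-bytes estimate. A back-of-the-envelope sketch (my own guess, not the harness's actual formula): BF16 weights at 2 bytes per parameter plus a flat ~1.2x margin for KV cache, activations, and allocator slack, with the ~685B parameter count for DeepSeek-V3/R1 taken from the model cards.

    def estimate_vram_gib(n_params: float, bytes_per_param: float = 2.0, margin: float = 1.2) -> float:
        # Weights in BF16 plus a flat overhead margin, expressed in GiB.
        return n_params * bytes_per_param * margin / 1024**3

    # ~685B parameters -> ~1531 GiB, the same ballpark as the 1657.55 GB
    # reported above; far beyond any single node this harness knows about.
    print(f"{estimate_vram_gib(685e9):.2f} GiB")
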
Everything was good in google_gemma-3-1b-it_0.txt
Everything was good in google_gemma-3n-E2B-it_0.txt
Everything was good in google_gemma-3n-E4B-it_0.txt

Traceback (most recent call last):
  File "/tmp/.cache/uv/environments-v2/b21213fbd95837b8/lib/python3.13/site-packages/huggingface_hub/_login.py", line 340, in notebook_login
    import ipywidgets.widgets as widgets  # type: ignore
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named 'ipywidgets'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/tmp/google_gemma-3n-E4B-it_1OrzVWg.py", line 12, in <module>
    notebook_login()
    ~~~~~~~~~~~~~~^^
  File "/tmp/.cache/uv/environments-v2/b21213fbd95837b8/lib/python3.13/site-packages/huggingface_hub/utils/_deprecation.py", line 101, in inner_f
    return f(*args, **kwargs)
  File "/tmp/.cache/uv/environments-v2/b21213fbd95837b8/lib/python3.13/site-packages/huggingface_hub/utils/_deprecation.py", line 31, in inner_f
    return f(*args, **kwargs)
  File "/tmp/.cache/uv/environments-v2/b21213fbd95837b8/lib/python3.13/site-packages/huggingface_hub/_login.py", line 343, in notebook_login
    raise ImportError(
    ...<2 lines>...
    )
ImportError: The `notebook_login` function can only be used in a notebook (Jupyter or Colab) and you need the `ipywidgets` module: `pip install ipywidgets`.

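This one is environmental rather than a missing dependency: /tmp/google_gemma-3n-E4B-it_1OrzVWg.py runs as a plain script, and notebook_login() only works inside Jupyter/Colab with ipywidgets installed. A minimal sketch of the non-notebook alternative (HF_TOKEN is again an assumed environment variable; a one-time interactive `huggingface-cli login` would work too):

    import os
    from huggingface_hub import login

    # login() needs no notebook and no ipywidgets; it registers the token
    # for all subsequent hub calls in this process.
    login(token=os.environ["HF_TOKEN"])
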
Traceback (most recent call last):
  File "/tmp/inclusionAI_Ling-Coder-lite-base_0cc5zzb.py", line 13, in <module>
    pipe = pipeline("text-generation", model="inclusionAI/Ling-Coder-lite-base", trust_remote_code=True)
  File "/tmp/.cache/uv/environments-v2/c78880f395eda5ac/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 1210, in pipeline
    return pipeline_class(model=model, framework=framework, task=task, **kwargs)
  File "/tmp/.cache/uv/environments-v2/c78880f395eda5ac/lib/python3.13/site-packages/transformers/pipelines/text_generation.py", line 121, in __init__
    super().__init__(*args, **kwargs)
    ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
  File "/tmp/.cache/uv/environments-v2/c78880f395eda5ac/lib/python3.13/site-packages/transformers/pipelines/base.py", line 1043, in __init__
    self.model.to(self.device)
    ~~~~~~~~~~~~~^^^^^^^^^^^^^
  File "/tmp/.cache/uv/environments-v2/c78880f395eda5ac/lib/python3.13/site-packages/transformers/modeling_utils.py", line 4341, in to
    return super().to(*args, **kwargs)
    ~~~~~~~~~~^^^^^^^^^^^^^^^^^
  File "/tmp/.cache/uv/environments-v2/c78880f395eda5ac/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1369, in to
    return self._apply(convert)
    ~~~~~~~~~~~^^^^^^^^^
  File "/tmp/.cache/uv/environments-v2/c78880f395eda5ac/lib/python3.13/site-packages/torch/nn/modules/module.py", line 928, in _apply
    module._apply(fn)
    ~~~~~~~~~~~~~^^^^
  File "/tmp/.cache/uv/environments-v2/c78880f395eda5ac/lib/python3.13/site-packages/torch/nn/modules/module.py", line 928, in _apply
    module._apply(fn)
    ~~~~~~~~~~~~~^^^^
  File "/tmp/.cache/uv/environments-v2/c78880f395eda5ac/lib/python3.13/site-packages/torch/nn/modules/module.py", line 928, in _apply
    module._apply(fn)
    ~~~~~~~~~~~~~^^^^
  [Previous line repeated 4 more times]
  File "/tmp/.cache/uv/environments-v2/c78880f395eda5ac/lib/python3.13/site-packages/torch/nn/modules/module.py", line 955, in _apply
    param_applied = fn(param)
  File "/tmp/.cache/uv/environments-v2/c78880f395eda5ac/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1355, in convert
    return t.to(
           ~~~~^
        device,
        ^^^^^^^
        dtype if t.is_floating_point() or t.is_complex() else None,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        non_blocking,
        ^^^^^^^^^^^^^
    )
    ^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacity of 22.30 GiB of which 704.00 KiB is free. Process 25872 has 22.29 GiB memory in use. Of the allocated memory 18.44 GiB is allocated by PyTorch, and 3.61 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)

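The traceback shows the pipeline materializing inclusionAI/Ling-Coder-lite-base in full FP32 and then moving it wholesale onto a 22.30 GiB card, which cannot hold it. One workaround sketch, assuming the accelerate package is installed: load in half precision with device_map="auto", so weights are placed on the GPU (and spilled to CPU RAM if needed) as they stream in, instead of after a full-precision load.

    import torch
    from transformers import pipeline

    pipe = pipeline(
        "text-generation",
        model="inclusionAI/Ling-Coder-lite-base",
        trust_remote_code=True,
        torch_dtype=torch.bfloat16,  # halves the weight footprint vs. FP32
        device_map="auto",           # stream/offload weights instead of one big .to()
    )
    print(pipe("def quicksort(arr):", max_new_tokens=32)[0]["generated_text"])
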
Everything was good in inclusionAI_Ling-Coder-lite-base_1.txt

Traceback (most recent call last):
  File "/tmp/inclusionAI_Ling-Coder-lite_00C9BJY.py", line 13, in <module>
    pipe = pipeline("text-generation", model="inclusionAI/Ling-Coder-lite", trust_remote_code=True)
  File "/tmp/.cache/uv/environments-v2/81bd7a78e7a9c12d/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 1210, in pipeline
    return pipeline_class(model=model, framework=framework, task=task, **kwargs)
  File "/tmp/.cache/uv/environments-v2/81bd7a78e7a9c12d/lib/python3.13/site-packages/transformers/pipelines/text_generation.py", line 121, in __init__
    super().__init__(*args, **kwargs)
    ~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
  File "/tmp/.cache/uv/environments-v2/81bd7a78e7a9c12d/lib/python3.13/site-packages/transformers/pipelines/base.py", line 1043, in __init__
    self.model.to(self.device)
    ~~~~~~~~~~~~~^^^^^^^^^^^^^
  File "/tmp/.cache/uv/environments-v2/81bd7a78e7a9c12d/lib/python3.13/site-packages/transformers/modeling_utils.py", line 4341, in to
    return super().to(*args, **kwargs)
    ~~~~~~~~~~^^^^^^^^^^^^^^^^^
  File "/tmp/.cache/uv/environments-v2/81bd7a78e7a9c12d/lib/python3.13/site-packages/torch/nn/modules/module.py", line 1369, in to
    return self._apply(convert)
    ~~~~~~~~~~~^^^^^^^^^
  File "/tmp/.cache/uv/environments-v2/81bd7a78e7a9c12d/lib/python3.13/site-packages/torch/nn/modules/module.py", line 928, in _apply
    module._apply(fn)