x86 SEMamba Docker Image
This Docker image provides a pre-configured development environment for running SEMamba models on x86_64 systems with CUDA-compatible GPUs such as the NVIDIA A100 and RTX 4090. It includes Python 3.12 and PyTorch 2.2.2, built on top of Ubuntu 22.04 with CUDA 12.4.
Contents
- OS: Ubuntu 22.04 (x86_64)
- Python: 3.12 (via Miniconda)
- CUDA: 12.4 (base image)
- PyTorch: 2.2.2
- TorchVision: 0.17.2
- TorchAudio: 2.2.2
- Mamba-SSM: 1.2.0
- Essential packages: git, vim, screen, htop, tmux, openssh, etc.
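Once inside the container (see Run Container below), the pinned versions can be sanity-checked with a quick one-liner. This is only an illustrative check, not part of the image itself:

python -c "import torch, torchvision, torchaudio; print(torch.__version__, torchvision.__version__, torchaudio.__version__, torch.version.cuda)"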
Usage
Download Docker Image
wget https://huggingface.co/datasets/rc19477/x86-semamba-docker/resolve/main/x86_semamba_py312_pt222_cuda124.tar
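If wget is not available, the same file can also be fetched with the Hugging Face CLI. This is an optional alternative and assumes huggingface_hub is installed:

pip install -U huggingface_hub
huggingface-cli download rc19477/x86-semamba-docker x86_semamba_py312_pt222_cuda124.tar --repo-type dataset --local-dir .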
Load Docker Image
docker load < x86_semamba_py312_pt222_cuda124.tar
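After loading, you can confirm the image is available. The exact repository name and tag depend on how the image was saved; x86_semamba_py312_pt222_cuda124 is assumed here:

docker images | grep x86_semamba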
Run Container
docker run --gpus all -it -v $(pwd):/workspace x86_semamba_py312_pt222_cuda124
This will mount your current directory into /workspace inside the container.
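To verify that the container can actually see the GPUs, a quick check from inside the container might look like this (illustrative only):

nvidia-smi
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"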
Purpose
- Simplifies setup for SEMamba on x86 GPU systems
- Provides a reproducible environment with version-pinned core libraries
License & Attribution
- This Docker image is shared for non-commercial research purposes.
- All included libraries retain their original licenses.
- Built on top of PyTorch, Miniconda, and Mamba-SSM.
Maintainer
For questions or issues, feel free to open a discussion or connect via GitHub.