Instructions to use SparseCL/BGE-SparseCL-msmarco with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use SparseCL/BGE-SparseCL-msmarco with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("SparseCL/BGE-SparseCL-msmarco")

# Note: `our_BertForCL` is the custom model class defined in the SparseCL
# codebase; it is not part of the transformers library itself.
model = our_BertForCL.from_pretrained("SparseCL/BGE-SparseCL-msmarco")
```

- Notebooks
- Google Colab
- Kaggle
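Once loaded, the encoder returns token-level hidden states, and a sentence embedding is typically obtained by pooling them. The sketch below uses dummy tensors as a stand-in for real model output and assumes CLS-token pooling with L2 normalization (common for BGE-style retrieval models); check the SparseCL repository for the exact pooling the checkpoint expects.

```python
import torch
import torch.nn.functional as F

def cls_pool(last_hidden_state: torch.Tensor) -> torch.Tensor:
    # Use the hidden state of the first ([CLS]) token as the sentence embedding.
    return last_hidden_state[:, 0]

# Dummy batch: 2 sentences, 5 tokens each, hidden size 8
# (in practice this would be model(**inputs).last_hidden_state).
hidden = torch.randn(2, 5, 8)

# L2-normalize so that dot products are cosine similarities.
emb = F.normalize(cls_pool(hidden), p=2, dim=1)
sim = emb @ emb.T  # pairwise cosine-similarity matrix, shape (2, 2)
print(sim.shape)
```

The same pooling-plus-normalize step applies when comparing query and passage embeddings for retrieval on MS MARCO-style data.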