SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2

This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: sentence-transformers/all-MiniLM-L6-v2
  • Maximum Sequence Length: 256 tokens
  • Output Dimensionality: 384 dimensions
  • Similarity Function: Cosine Similarity
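
The listed properties can be checked directly on the loaded model. A minimal sketch, assuming the Hub id NilsML/MNLP_M3_document_encoder used in the usage example below:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("NilsML/MNLP_M3_document_encoder")
print(model.get_sentence_embedding_dimension())  # 384
print(model.max_seq_length)                      # 256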

Model Sources

  • Documentation: Sentence Transformers Documentation (https://www.sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
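
The same three stages (transformer encoder, attention-mask mean pooling, L2 normalization) can be reproduced with the plain transformers library. This is a minimal sketch, assuming the fine-tuned weights under NilsML/MNLP_M3_document_encoder; the SentenceTransformer usage below remains the recommended path.

import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

model_id = "NilsML/MNLP_M3_document_encoder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["This is an example sentence.", "Each sentence is mapped to a 384-dimensional vector."]
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=256, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, 384)

# (1) Pooling: attention-mask-weighted mean over token embeddings
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# (2) Normalize: unit-length vectors, so dot product equals cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([2, 384])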

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("NilsML/MNLP_M3_document_encoder")
# Run inference
sentences = [
    'What type of electrons are electrons that are not confined to the bond between two atoms?',
    "the human capacity for working together and with tools builds on cognitive abilities that, while not unique to humans, are most developed in humans both in scale and plasticity. our capacity to engage with collaborators and with technology requires a continuous expenditure of attentive work that we show may be understood in terms of what is heuristically argued as ` trust ' in socio - economic fields. by adopting a ` social physics ' of information approach, we are able to bring dimensional analysis to bear on an anthropological - economic issue. the cognitive - economic trade - off between group size and rate of attention to detail is the connection between these. this allows humans to scale cooperative effort across groups, from teams to communities, with a trade - off between group size and attention. we show here that an accurate concept of trust follows a bipartite ` economy of work ' model, and that this leads to correct predictions about the statistical distribution of group sizes in society. trust is essentially a cognitive - economic issue that depends on the memory cost of past behaviour and on the frequency of attentive policing of intent. all this leads to the characteristic ` fractal ' structure for human communities. the balance between attraction to some alpha attractor and dispersion due to conflict fully explains data from all relevant sources. the implications of our method suggest a broad applicability beyond purely social groupings to general resource constrained interactions, e. g. in work, technology, cybernetics, and generalized socio - economic systems of all kinds.",
    'we consider a long - term optimal investment problem where an investor tries to minimize the probability of falling below a target growth rate. from a mathematical viewpoint, this is a large deviation control problem. this problem will be shown to relate to a risk - sensitive stochastic control problem for a sufficiently large time horizon. indeed, in our theorem we state a duality in the relation between the above two problems. furthermore, under a multidimensional linear gaussian model we obtain explicit solutions for the primal problem.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
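
Since the model is evaluated on information retrieval (see Metrics below), a common pattern is to embed one query and a set of documents and rank the documents by cosine similarity. A minimal sketch; the query is taken from the example above, and the two documents are made up for illustration:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("NilsML/MNLP_M3_document_encoder")

query = "What type of electrons are electrons that are not confined to the bond between two atoms?"
documents = [
    "delocalized electrons are electrons in a molecule that are not associated with a single atom or a single covalent bond.",
    "we consider a long-term optimal investment problem where an investor tries to minimize the probability of falling below a target growth rate.",
]

query_embedding = model.encode([query])
doc_embeddings = model.encode(documents)

# Cosine similarity between the query and each document (the embeddings are L2-normalized)
scores = model.similarity(query_embedding, doc_embeddings)  # shape [1, 2]
best = scores.argmax().item()
print(best, scores[0, best].item())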

Evaluation

Metrics

Information Retrieval

Metric Value
cosine_accuracy@1 0.647
cosine_accuracy@3 0.751
cosine_accuracy@5 0.786
cosine_accuracy@10 0.827
cosine_precision@1 0.647
cosine_precision@3 0.2503
cosine_precision@5 0.1572
cosine_precision@10 0.0827
cosine_recall@1 0.647
cosine_recall@3 0.751
cosine_recall@5 0.786
cosine_recall@10 0.827
cosine_ndcg@10 0.7352
cosine_mrr@10 0.7059
cosine_map@100 0.7087
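
These figures are consistent with Sentence Transformers' InformationRetrievalEvaluator (the training logs below refer to it as sciq-eval). A minimal sketch of how such numbers can be computed on your own query/corpus split; the ids and texts here are placeholders, not the actual evaluation data:

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("NilsML/MNLP_M3_document_encoder")

# Placeholder data: query id -> text, corpus id -> text, query id -> relevant corpus ids
queries = {"q1": "What type of electrons are not confined to the bond between two atoms?"}
corpus = {
    "d1": "delocalized electrons are shared by more than two atoms in a molecule.",
    "d2": "we consider a long-term optimal investment problem.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="sciq-eval")
results = evaluator(model)
print(results)  # keys are prefixed with the evaluator name, e.g. "sciq-eval_cosine_ndcg@10"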

Training Details

Training Dataset

Unnamed Dataset

  • Size: 46,716 training samples
  • Columns: sentence_0, sentence_1, and label
  • Approximate statistics based on the first 1000 samples:
    • sentence_0: string; min 5 tokens, mean 18.07 tokens, max 75 tokens
    • sentence_1: string; min 2 tokens, mean 175.71 tokens, max 256 tokens
    • label: float; min 0.0, mean 0.24, max 1.0
  • Samples:
    • sentence_0: What occurs when a former inhabited area gets disturbed?
      sentence_1: recent approaches to improving the extraction of text embeddings from autoregressive large language models ( llms ) have largely focused on improvements to data, backbone pretrained language models, or improving task - differentiation via instructions. in this work, we address an architectural limitation of autoregressive models : token embeddings cannot contain information from tokens that appear later in the input. to address this limitation, we propose a simple approach, " echo embeddings, " in which we repeat the input twice in context and extract embeddings from the second occurrence. we show that echo embeddings of early tokens can encode information about later tokens, allowing us to maximally leverage high - quality llms for embeddings. on the mteb leaderboard, echo embeddings improve over classical embeddings by over 9 % zero - shot and by around 0. 7 % when fine - tuned. echo embeddings with a mistral - 7b model achieve state - of - the - art compared to prior open source mod...
      label: 0.0
    • sentence_0: Veins subdivide repeatedly and branch throughout what?
      sentence_1: the notion of generalization has moved away from the classical one defined in statistical learning theory towards an emphasis on out - of - domain generalization ( oodg ). recently, there is a growing focus on inductive generalization, where a progression of difficulty implicitly governs the direction of domain shifts. in inductive generalization, it is often assumed that the training data lie in the easier side, while the testing data lie in the harder side. the challenge is that training data are always finite, but a learner is expected to infer an inductive principle that could be applied in an unbounded manner. this emerging regime has appeared in the literature under different names, such as length / logical / algorithmic extrapolation, but a formal definition is lacking. this work provides such a formalization that centers on the concept of model successors. then we outline directions to adapt well - established techniques towards the learning of model successors. this work calls...
      label: 0.0
    • sentence_0: What is the term for physicians and scientists who research and develop vaccines and treat and study conditions ranging from allergies to aids?
      sentence_1: we generalize the hierarchy construction to generic 2 + 1d topological orders ( which can be non - abelian ) by condensing abelian anyons in one topological order to construct a new one. we show that such construction is reversible and leads to a new equivalence relation between topological orders. we refer to the corresponding equivalent class ( the orbit of the hierarchy construction ) as " the non - abelian family ". each non - abelian family has one or a few root topological orders with the smallest number of anyon types. all the abelian topological orders belong to the trivial non - abelian family whose root is the trivial topological order. we show that abelian anyons in root topological orders must be bosons or fermions with trivial mutual statistics between them. the classification of topological orders is then greatly simplified, by focusing on the roots of each family : those roots are given by non - abelian modular extensions of representation categories of abelian groups.
      label: 0.0
  • Loss: MultipleNegativesRankingLoss with these parameters:
    {
        "scale": 20.0,
        "similarity_fct": "cos_sim"
    }
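
The parameters above correspond to the defaults of MultipleNegativesRankingLoss; spelled out explicitly, the loss can be constructed like this (a sketch):

from sentence_transformers import SentenceTransformer, losses, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# Scaled cosine similarity with in-batch negatives; the float label column above is not used by this loss.
loss = losses.MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)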
    

Training Hyperparameters

Non-Default Hyperparameters

  • eval_strategy: steps
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • num_train_epochs: 1
  • multi_dataset_batch_sampler: round_robin
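
Put together with the loss above, a comparable training run could look roughly like the following sketch. The example pair and output path are hypothetical, and the real 46,716-sample dataset is not reproduced here:

from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

# Hypothetical stand-in for the real (sentence_0, sentence_1) pairs;
# the dataset's float label column is omitted because this loss does not use it.
train_dataset = Dataset.from_dict({
    "sentence_0": ["What type of electrons are not confined to the bond between two atoms?"],
    "sentence_1": ["delocalized electrons are shared by more than two atoms in a molecule."],
})
eval_dataset = train_dataset  # placeholder; use a held-out split in practice

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loss = losses.MultipleNegativesRankingLoss(model)  # defaults: scale=20.0, cos_sim

args = SentenceTransformerTrainingArguments(
    output_dir="mnlp_m3_document_encoder",  # hypothetical output path
    num_train_epochs=1,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    eval_strategy="steps",
    eval_steps=100,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    loss=loss,
)
trainer.train()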

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: steps
  • prediction_loss_only: True
  • per_device_train_batch_size: 32
  • per_device_eval_batch_size: 32
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • torch_empty_cache_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 1
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • tp_size: 0
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: None
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • include_for_metrics: []
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • use_liger_kernel: False
  • eval_use_gather_object: False
  • average_tokens_across_devices: False
  • prompts: None
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Training loss and the sciq-eval cosine NDCG@10 recorded during training (a dash means the training loss was not logged at that step):

Epoch Step Training Loss sciq-eval_cosine_ndcg@10
0.0685 100 - 0.6007
0.1370 200 - 0.7026
0.2055 300 - 0.7167
0.2740 400 - 0.7195
0.3425 500 2.8082 0.7150
0.4110 600 - 0.7292
0.4795 700 - 0.7356
0.5479 800 - 0.7428
0.6164 900 - 0.7399
0.6849 1000 2.6228 0.7339
0.7534 1100 - 0.7356
0.8219 1200 - 0.7375
0.8904 1300 - 0.7385
0.9589 1400 - 0.7351
1.0 1460 - 0.7352

Framework Versions

  • Python: 3.12.8
  • Sentence Transformers: 3.4.1
  • Transformers: 4.51.3
  • PyTorch: 2.5.1+cu124
  • Accelerate: 1.3.0
  • Datasets: 3.2.0
  • Tokenizers: 0.21.0
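
To approximately reproduce this environment, the library versions above can be pinned; this is a sketch, and the CUDA-specific PyTorch build (2.5.1+cu124) additionally requires the matching PyTorch wheel index:

pip install sentence-transformers==3.4.1 transformers==4.51.3 torch==2.5.1 accelerate==1.3.0 datasets==3.2.0 tokenizers==0.21.0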

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

MultipleNegativesRankingLoss

@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}