SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a sentence-transformers model finetuned from sentence-transformers/all-MiniLM-L6-v2. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: sentence-transformers/all-MiniLM-L6-v2
- Maximum Sequence Length: 256 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
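The Pooling and Normalize modules mean that each sentence embedding is the mean of the token embeddings over non-padding tokens, then L2-normalized. The recommended path is the Sentence Transformers API shown under Usage; the following is only a minimal sketch of how the same pipeline could be reproduced with plain 🤗 Transformers, assuming the checkpoint in this repository loads via AutoModel.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "NilsML/MNLP_M3_document_encoder"  # repository id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

sentences = ["An example sentence.", "Another example sentence."]
encoded = tokenizer(sentences, padding=True, truncation=True, max_length=256, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**encoded).last_hidden_state  # (batch, seq_len, 384)

# Module (1): mean pooling over non-padding tokens
mask = encoded["attention_mask"].unsqueeze(-1).float()
embeddings = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# Module (2): L2 normalization, so the dot product equals cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
print(embeddings.shape)  # torch.Size([2, 384])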
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("NilsML/MNLP_M3_document_encoder")
# Run inference
sentences = [
'What type of electrons are electrons that are not confined to the bond between two atoms?',
"the human capacity for working together and with tools builds on cognitive abilities that, while not unique to humans, are most developed in humans both in scale and plasticity. our capacity to engage with collaborators and with technology requires a continuous expenditure of attentive work that we show may be understood in terms of what is heuristically argued as ` trust ' in socio - economic fields. by adopting a ` social physics ' of information approach, we are able to bring dimensional analysis to bear on an anthropological - economic issue. the cognitive - economic trade - off between group size and rate of attention to detail is the connection between these. this allows humans to scale cooperative effort across groups, from teams to communities, with a trade - off between group size and attention. we show here that an accurate concept of trust follows a bipartite ` economy of work ' model, and that this leads to correct predictions about the statistical distribution of group sizes in society. trust is essentially a cognitive - economic issue that depends on the memory cost of past behaviour and on the frequency of attentive policing of intent. all this leads to the characteristic ` fractal ' structure for human communities. the balance between attraction to some alpha attractor and dispersion due to conflict fully explains data from all relevant sources. the implications of our method suggest a broad applicability beyond purely social groupings to general resource constrained interactions, e. g. in work, technology, cybernetics, and generalized socio - economic systems of all kinds.",
'we consider a long - term optimal investment problem where an investor tries to minimize the probability of falling below a target growth rate. from a mathematical viewpoint, this is a large deviation control problem. this problem will be shown to relate to a risk - sensitive stochastic control problem for a sufficiently large time horizon. indeed, in our theorem we state a duality in the relation between the above two problems. furthermore, under a multidimensional linear gaussian model we obtain explicit solutions for the primal problem.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
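Because the model is evaluated for retrieval below, a common follow-up is semantic search over a corpus. A minimal sketch reusing the model and sentences from the snippet above; the query is a made-up example:

from sentence_transformers import util

# Hypothetical query for illustration; the corpus is the `sentences` list encoded above
query = "Which electrons are not confined to the bond between two atoms?"
query_embedding = model.encode(query, convert_to_tensor=True)
corpus_embeddings = model.encode(sentences, convert_to_tensor=True)

# For each query, returns the top_k corpus indices with cosine-similarity scores
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(hit["corpus_id"], round(hit["score"], 3))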
Evaluation
Metrics
Information Retrieval
- Dataset: sciq-eval
- Evaluated with InformationRetrievalEvaluator (a usage sketch follows the metrics table below)
Metric | Value |
---|---|
cosine_accuracy@1 | 0.647 |
cosine_accuracy@3 | 0.751 |
cosine_accuracy@5 | 0.786 |
cosine_accuracy@10 | 0.827 |
cosine_precision@1 | 0.647 |
cosine_precision@3 | 0.2503 |
cosine_precision@5 | 0.1572 |
cosine_precision@10 | 0.0827 |
cosine_recall@1 | 0.647 |
cosine_recall@3 | 0.751 |
cosine_recall@5 | 0.786 |
cosine_recall@10 | 0.827 |
cosine_ndcg@10 | 0.7352 |
cosine_mrr@10 | 0.7059 |
cosine_map@100 | 0.7087 |
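The evaluator takes three dictionaries: query id → query text, corpus id → passage text, and query id → set of relevant corpus ids. A minimal sketch with placeholder data (not the actual sciq-eval split), assuming `model` is loaded as in the Usage section:

from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder data for illustration only; the reported numbers come from the sciq-eval split
queries = {"q1": "What type of electrons are not confined to the bond between two atoms?"}
corpus = {"d1": "A passage that answers q1.", "d2": "An unrelated passage."}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="sciq-eval",
)
metrics = evaluator(model)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100
print(metrics)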
Training Details
Training Dataset
Unnamed Dataset
- Size: 46,716 training samples
- Columns: sentence_0, sentence_1, and label
- Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|---|---|---|---|
| type | string | string | float |
| details | min: 5 tokens, mean: 18.07 tokens, max: 75 tokens | min: 2 tokens, mean: 175.71 tokens, max: 256 tokens | min: 0.0, mean: 0.24, max: 1.0 |
- Samples:
| sentence_0 | sentence_1 | label |
|---|---|---|
| What occurs when a former inhabited area gets disturbed? | recent approaches to improving the extraction of text embeddings from autoregressive large language models ( llms ) have largely focused on improvements to data, backbone pretrained language models, or improving task - differentiation via instructions. in this work, we address an architectural limitation of autoregressive models : token embeddings cannot contain information from tokens that appear later in the input. to address this limitation, we propose a simple approach, " echo embeddings, " in which we repeat the input twice in context and extract embeddings from the second occurrence. we show that echo embeddings of early tokens can encode information about later tokens, allowing us to maximally leverage high - quality llms for embeddings. on the mteb leaderboard, echo embeddings improve over classical embeddings by over 9 % zero - shot and by around 0. 7 % when fine - tuned. echo embeddings with a mistral - 7b model achieve state - of - the - art compared to prior open source mod... | 0.0 |
| Veins subdivide repeatedly and branch throughout what? | the notion of generalization has moved away from the classical one defined in statistical learning theory towards an emphasis on out - of - domain generalization ( oodg ). recently, there is a growing focus on inductive generalization, where a progression of difficulty implicitly governs the direction of domain shifts. in inductive generalization, it is often assumed that the training data lie in the easier side, while the testing data lie in the harder side. the challenge is that training data are always finite, but a learner is expected to infer an inductive principle that could be applied in an unbounded manner. this emerging regime has appeared in the literature under different names, such as length / logical / algorithmic extrapolation, but a formal definition is lacking. this work provides such a formalization that centers on the concept of model successors. then we outline directions to adapt well - established techniques towards the learning of model successors. this work calls... | 0.0 |
| What is the term for physicians and scientists who research and develop vaccines and treat and study conditions ranging from allergies to aids? | we generalize the hierarchy construction to generic 2 + 1d topological orders ( which can be non - abelian ) by condensing abelian anyons in one topological order to construct a new one. we show that such construction is reversible and leads to a new equivalence relation between topological orders. we refer to the corresponding equivalent class ( the orbit of the hierarchy construction ) as " the non - abelian family ". each non - abelian family has one or a few root topological orders with the smallest number of anyon types. all the abelian topological orders belong to the trivial non - abelian family whose root is the trivial topological order. we show that abelian anyons in root topological orders must be bosons or fermions with trivial mutual statistics between them. the classification of topological orders is then greatly simplified, by focusing on the roots of each family : those roots are given by non - abelian modular extensions of representation categories of abelian groups. | 0.0 |
- Loss: MultipleNegativesRankingLoss with these parameters:
  { "scale": 20.0, "similarity_fct": "cos_sim" }
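A minimal sketch of constructing the loss with these parameters (scale=20.0 and cosine similarity are also the library defaults); `model` is the SentenceTransformer being fine-tuned:

from sentence_transformers import util
from sentence_transformers.losses import MultipleNegativesRankingLoss

# scale and similarity_fct match the values listed above
loss = MultipleNegativesRankingLoss(model, scale=20.0, similarity_fct=util.cos_sim)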
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- num_train_epochs: 1
- multi_dataset_batch_sampler: round_robin
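A rough sketch of how these non-default values map onto the Sentence Transformers trainer API; output_dir is a placeholder, and train_dataset, loss, and evaluator are assumed to be defined as in the dataset description and the sketches above:

from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)

# Fine-tuning starts from the base model named in the Model Description
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

args = SentenceTransformerTrainingArguments(
    output_dir="models/MNLP_M3_document_encoder",  # placeholder path
    num_train_epochs=1,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # the 46,716-pair dataset described above
    loss=loss,
    evaluator=evaluator,
)
trainer.train()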
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 32
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.0
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- tp_size: 0
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: round_robin
Training Logs
Epoch | Step | Training Loss | sciq-eval_cosine_ndcg@10 |
---|---|---|---|
0.0685 | 100 | - | 0.6007 |
0.1370 | 200 | - | 0.7026 |
0.2055 | 300 | - | 0.7167 |
0.2740 | 400 | - | 0.7195 |
0.3425 | 500 | 2.8082 | 0.7150 |
0.4110 | 600 | - | 0.7292 |
0.4795 | 700 | - | 0.7356 |
0.5479 | 800 | - | 0.7428 |
0.6164 | 900 | - | 0.7399 |
0.6849 | 1000 | 2.6228 | 0.7339 |
0.7534 | 1100 | - | 0.7356 |
0.8219 | 1200 | - | 0.7375 |
0.8904 | 1300 | - | 0.7385 |
0.9589 | 1400 | - | 0.7351 |
1.0 | 1460 | - | 0.7352 |
Framework Versions
- Python: 3.12.8
- Sentence Transformers: 3.4.1
- Transformers: 4.51.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.2.0
- Tokenizers: 0.21.0
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
MultipleNegativesRankingLoss
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}