This is a sentence-transformers model finetuned from nomic-ai/modernbert-embed-base on the gooaq dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
SentenceTransformer(
(0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: ModernBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
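The Pooling module averages the token embeddings (mean pooling) and Normalize rescales each sentence vector to unit length. The following is a minimal sketch of what those two modules do by hand, assuming the checkpoint also loads with plain transformers:

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("Areeb-02/modernbert-embed-base-gooaq-8e-05")
enc = AutoModel.from_pretrained("Areeb-02/modernbert-embed-base-gooaq-8e-05")

batch = tok(["example sentence"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    token_emb = enc(**batch).last_hidden_state          # [batch, seq, 768]

# Masked mean pooling: average only over real (non-padding) tokens.
mask = batch["attention_mask"].unsqueeze(-1).float()    # [batch, seq, 1]
mean = (token_emb * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

# Normalize: project onto the unit sphere, so dot products are cosine similarities.
emb = F.normalize(mean, p=2, dim=1)                     # [batch, 768]
```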
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("Areeb-02/modernbert-embed-base-gooaq-8e-05")
# Run inference
sentences = [
    'is bmw 225xe 4 wheel drive?',
    'The BMW 225xe offers both a higher system output and more boot capacity than its competitors. With its plug-in hybrid drive system, the BMW 225xe combines BMW EfficientDynamics with comfort, driving pleasure and all-wheel drive, and brings versatility and generous levels of space together in a compact vehicle.',
    'Tap Settings > [your name] > Password & Security. Tap Change Password. Enter your current password or device passcode, then enter a new password and confirm the new password. Tap Change or Change Password.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
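Because the embeddings are unit-normalized, the similarity scores above are cosine similarities, so ranking a corpus against a query is a single matrix product. A small retrieval sketch (the corpus strings are illustrative):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Areeb-02/modernbert-embed-base-gooaq-8e-05")

query = "is bmw 225xe 4 wheel drive?"
corpus = [
    "The BMW 225xe combines a plug-in hybrid drive system with all-wheel drive.",
    "Tap Settings > [your name] > Password & Security to change your password.",
]

# Encode query and corpus, then rank the corpus by cosine similarity.
scores = model.similarity(model.encode([query]), model.encode(corpus))  # [1, 2]
best = scores.argmax().item()
print(corpus[best], float(scores[0, best]))
```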
Training dataset: gooaq, with columns question and answer.

| | question | answer |
|---|---|---|
| type | string | string |

Samples:

| question | answer |
|---|---|
| what is the difference between clay and mud mask? | The main difference between the two is that mud is a skin-healing agent, while clay is a cosmetic, drying agent. Clay masks are most useful for someone who has oily skin and is prone to breakouts of acne and blemishes. |
| myki how much on card? | A full fare myki card costs $6 and a concession, seniors or child myki costs $3. For more information about how to use your myki, visit ptv.vic.gov.au or call 1800 800 007. |
| how to find out if someone blocked your phone number on iphone? | If you get a notification like "Message Not Delivered" or you get no notification at all, that's a sign of a potential block. Next, you could try calling the person. If the call goes right to voicemail or rings once (or a half ring) then goes to voicemail, that's further evidence you may have been blocked. |
Loss: CachedMultipleNegativesRankingLoss with these parameters:

{
    "scale": 20.0,
    "similarity_fct": "cos_sim"
}
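For reference, this loss can be instantiated in Sentence Transformers as below. The `mini_batch_size` value is an illustrative assumption; it only sets the size of the embedding chunks used by the gradient cache, trading speed for memory without changing the loss:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# scale matches the value above; cosine similarity is the default similarity_fct.
# mini_batch_size is an assumption for illustration.
loss = CachedMultipleNegativesRankingLoss(model, scale=20.0, mini_batch_size=32)
```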
Evaluation dataset: gooaq, with columns question and answer.

| | question | answer |
|---|---|---|
| type | string | string |

Samples:

| question | answer |
|---|---|
| what are the four most common types of shopping centers quizlet? | ['the neighborhood center.', 'the community center.', 'the regional center.', 'the super-regional center.'] |
| how far back does a enhanced dbs check go? | The filtering periods for cautions are two years for under 18s and six years for those aged 18 and over. The filtering periods for convictions are 5.5 years for under 18s and 11 years for those aged 18 and over. |
| can ezpass be used in colorado? | ExpressToll transponder, switchable transponder, EZPass. ... ExpressToll passes only work in the State of Colorado. Travelers from out-of-state can use the express lanes and are billed via License Plate Toll. Formerly there was a separate tolling system for users of E470 called EZPass. |
Loss: CachedMultipleNegativesRankingLoss with these parameters:

{
    "scale": 20.0,
    "similarity_fct": "cos_sim"
}
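Conceptually, MultipleNegativesRankingLoss treats every other answer in the batch as a negative: it is cross-entropy over each question's row of the scaled question-answer cosine-similarity matrix, with the matching answer on the diagonal. A minimal sketch of that objective follows; the cached variant computes the same loss in gradient-cached mini-batches so large batches fit in memory, following the Gao et al. paper cited below:

```python
import torch
import torch.nn.functional as F

def mnrl(q_emb: torch.Tensor, a_emb: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    # Cosine similarity of every question against every answer in the batch.
    q = F.normalize(q_emb, dim=1)
    a = F.normalize(a_emb, dim=1)
    sim = scale * (q @ a.T)                              # [batch, batch]
    # The i-th question's positive answer sits at column i.
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)
```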
Non-default hyperparameters:

- eval_strategy: steps
- per_device_train_batch_size: 30
- per_device_eval_batch_size: 30
- learning_rate: 8e-05
- num_train_epochs: 2
- warmup_ratio: 0.05
- fp16: True
- batch_sampler: no_duplicates

All hyperparameters:

- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 30
- per_device_eval_batch_size: 30
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- torch_empty_cache_steps: None
- learning_rate: 8e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 2
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.05
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: True
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: False
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: None
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- include_for_metrics: []
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- eval_on_start: False
- use_liger_kernel: False
- eval_use_gather_object: False
- average_tokens_across_devices: False
- prompts: None
- batch_sampler: no_duplicates
- multi_dataset_batch_sampler: proportional

Training logs:

| Epoch | Step | Training Loss | Validation Loss |
|---|---|---|---|
| 0.1471 | 5 | 0.0452 | - |
| 0.2941 | 10 | 0.0202 | - |
| 0.4412 | 15 | 0.0227 | - |
| 0.5882 | 20 | 0.0258 | - |
| 0.7353 | 25 | 0.0361 | - |
| 0.8824 | 30 | 0.03 | - |
| 1.0294 | 35 | 0.0246 | - |
| 1.1765 | 40 | 0.0036 | - |
| 1.3235 | 45 | 0.0019 | - |
| 1.4706 | 50 | 0.0021 | 0.0161 |
| 1.6176 | 55 | 0.0057 | - |
| 1.7647 | 60 | 0.0083 | - |
| 1.9118 | 65 | 0.0024 | - |
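The run above corresponds to a standard Sentence Transformers trainer setup. A hedged sketch of how these hyperparameters fit together; the `sentence-transformers/gooaq` dataset id and the eval split size are assumptions, while the remaining values mirror the non-default hyperparameters listed above:

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import CachedMultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

model = SentenceTransformer("nomic-ai/modernbert-embed-base")

# Assumption: the gooaq pairs live under this Hub dataset id with
# "question" and "answer" columns; the eval split size is illustrative.
dataset = load_dataset("sentence-transformers/gooaq", split="train")
dataset = dataset.train_test_split(test_size=1000, seed=42)

loss = CachedMultipleNegativesRankingLoss(model, scale=20.0)

args = SentenceTransformerTrainingArguments(
    output_dir="modernbert-embed-base-gooaq-8e-05",
    num_train_epochs=2,
    per_device_train_batch_size=30,
    per_device_eval_batch_size=30,
    learning_rate=8e-05,
    warmup_ratio=0.05,
    fp16=True,
    eval_strategy="steps",
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # avoids duplicate in-batch negatives
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    loss=loss,
)
trainer.train()
```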
Citation (BibTeX):

@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
Base model: answerdotai/ModernBERT-base