---
license: other
license_name: nvidia-open-model-license
license_link: >-
https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license
library_name: cosmos
tags:
- nvidia
- nemo
- cosmos
extra_gated_prompt: >-
# NVIDIA Open Model License Agreement
Version Release Date: January 6, 2025
This NVIDIA Open Model License Agreement (the "<ins>Agreement</ins>") is a legal agreement between the Legal Entity You represent, or if no entity is identified, You and NVIDIA Corporation and its Affiliates ("<ins>NVIDIA</ins>") and governs Your use of the Models that NVIDIA provides to You under this Agreement. NVIDIA and You are each a "<ins>party</ins>" and collectively the "<ins>parties</ins>."
NVIDIA models released under this Agreement are intended to be used permissively and enable the further development of AI technologies. Subject to the terms of this Agreement, NVIDIA confirms that:
* Models are commercially usable.
* You are free to create and distribute Derivative Models.
* NVIDIA does not claim ownership to any outputs generated using the Models or Model Derivatives.
By using, reproducing, modifying, distributing, performing or displaying any portion or element of the Model or Derivative Model, or otherwise accepting the terms of this Agreement, you agree to be bound by this Agreement.
## 1. Definitions
The following definitions apply to this Agreement:
1.1. "<ins>NVIDIA Cosmos Model</ins>" means a multimodal Model shared under this Agreement.
1.2. "<ins>Derivative Model</ins>" means all (a) modifications to the Model, (b) works based on the Model, and (c) any other derivative works of the Model. An output is not a Derivative Model.
1.3. "<ins>Legal Entity</ins>" means the union of the acting entity and all other entities that <ins>control</ins>, are controlled by, or are under common control with that entity. For the purposes of this definition, "<ins>control</ins>" means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of fifty percent (50%) or more of the outstanding shares, or (c) beneficial ownership of such entity.
1.4. "<ins>Model</ins>" means the machine learning model, software, checkpoints, learnt weights, algorithms, parameters, configuration files and documentation shared under this Agreement.
1.5. "<ins>You</ins>" or "<ins>Your</ins>" means an individual or Legal Entity exercising permissions granted by this Agreement.
## 2. Conditions for Use, License Grant, AI Ethics and IP Ownership
2.1. Conditions for Use. The Model and any Derivative Model are subject to additional terms as described in Section 2 and Section 3 of this Agreement and govern Your use. If You institute copyright or patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Model or a Derivative Model constitutes direct or contributory copyright or patent infringement, then any licenses granted to You under this Agreement for that Model or Derivative Model will terminate as of the date such litigation is filed. If You bypass, disable, reduce the efficacy of, or circumvent any technical limitation, safety guardrail or associated safety guardrail hyperparameter, encryption, security, digital rights management, or authentication mechanism contained in the Model, your rights under this Agreement will automatically terminate. NVIDIA may update this Agreement to comply with legal and regulatory requirements at any time and You agree to either comply with any updated license or cease Your copying, use, and distribution of the Model and any Derivative Model.
2.2. License Grant. The rights granted herein are explicitly conditioned on Your full compliance with the terms of this Agreement. Subject to the terms and conditions of this Agreement, NVIDIA hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, revocable (as stated in Section 2.1) license to publicly perform, publicly display, reproduce, use, create derivative works of, make, have made, sell, offer for sale, distribute (through multiple tiers of distribution) and import the Model.
2.3. AI Ethics. Use of the Models under the Agreement must be consistent with NVIDIA's Trustworthy AI terms found at https://www.nvidia.com/en-us/agreements/trustworthy-ai/terms/.
2.4. NVIDIA owns the Model and any Model Derivatives created by NVIDIA. Subject to NVIDIA's underlying ownership rights in the Model or its Model Derivatives, You are and will be the owner of Your Model Derivatives. NVIDIA claims no ownership rights in outputs. You are responsible for outputs and their subsequent uses. Except as expressly granted in this Agreement, (a) NVIDIA reserves all rights, interests and remedies in connection with the Model and (b) no other license or right is granted to you by implication, estoppel or otherwise.
## 3. Redistribution
You may reproduce and distribute copies of the Model or Derivative Models thereof in any medium, with or without modifications, provided that You meet the following conditions:
3.1. If you distribute the Model, You must give any other recipients of the Model a copy of this Agreement and include the following attribution notice within a "Notice" text file with such copies: "Licensed by NVIDIA Corporation under the NVIDIA Open Model License";
3.2. If you distribute or make available a NVIDIA Cosmos Model, or a product or service (including an AI model) that contains or uses a NVIDIA Cosmos Model, use a NVIDIA Cosmos Model to create a Derivative Model, or use a NVIDIA Cosmos Model or its outputs to create, train, fine tune, or otherwise improve an AI model, you will include "Built on NVIDIA Cosmos" on a related website, user interface, blogpost, about page, or product documentation; and
3.3. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Models as a whole, provided Your use, reproduction, and distribution of the Model otherwise complies with the conditions stated in this Agreement.
## 4. Trademarks
This Agreement does not grant permission to use the trade names, trademarks, service marks, or product names of NVIDIA, except as required for reasonable and customary use in describing the origin of the Model and reproducing the content of the "Notice" text file.
## **5. Disclaimer of Warranty**
**Unless required by applicable law or agreed to in writing, NVIDIA provides the Model on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Model, Derivative Models and outputs and assume any risks associated with Your exercise of permissions under this Agreement.**
## **6. Limitation of Liability**
**In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, will NVIDIA be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this Agreement or out of the use or inability to use the Model, Derivative Models or outputs (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if NVIDIA has been advised of the possibility of such damages.**
## 7. Indemnity
You will indemnify and hold harmless NVIDIA from and against any claim by any third party arising out of or related to your use or distribution of the Model, Model Derivatives or outputs.
## 8. Feedback
NVIDIA appreciates your feedback, and You agree that NVIDIA may use it without restriction or compensation to You.
## 9. Governing Law
This Agreement will be governed in all respects by the laws of the United States and the laws of the State of Delaware, without regard to conflict of laws principles or the United Nations Convention on Contracts for the International Sale of Goods. The state and federal courts residing in Santa Clara County, California will have exclusive jurisdiction over any dispute or claim arising out of or related to this Agreement, and the parties irrevocably consent to personal jurisdiction and venue in those courts; except that, either party may apply for injunctive remedies or an equivalent type of urgent legal relief in any jurisdiction.
## 10. Trade and Compliance
You agree to comply with all applicable export, import, trade and economic sanctions laws and regulations, as amended, including without limitation U.S. Export Administration Regulations and Office of Foreign Assets Control regulations. These laws include restrictions on destinations, end-users and end-use.
extra_gated_fields:
By clicking Submit below, I accept the terms of the NVIDIA Open Model License Agreement and acknowledge that I am an adult of legal age of majority in the country in which the Cosmos Models will be used and have authority to accept this Agreement: checkbox
extra_gated_description: >-
The information you provide will be collected, stored, processed and shared in accordance with the [NVIDIA Privacy Policy](https://www.nvidia.com/en-us/about-nvidia/privacy-policy/).
extra_gated_button_content: Submit
---
# **Cosmos-Embed1**: A joint video-text embedder for physical AI
[**Website**](https://research.nvidia.com/labs/dir/cosmos-embed1) | [**Hugging Face**](https://huggingface.co/collections/nvidia/cosmos-embed1-6833be2c9575cb89d6901504) | [**Demo app**](https://huggingface.co/spaces/nvidia/Cosmos-Embed1)
# Model Overview
## Description
**Cosmos-Embed1** is a joint video-text embedder tailored for physical AI. It can be used for text-to-video retrieval, inverse video search, semantic deduplication, zero-shot and k-nearest-neighbors (kNN) classification, and as a base model for video curation tasks. It has state-of-the-art (SOTA) performance on autonomous vehicle (AV) and robotics datasets, while maintaining competitive performance in general domains. This model is ready for commercial use.
**Model Developer**: NVIDIA
## Model Versions
The Cosmos-Embed1 release includes the following embedders:
* **Cosmos-Embed1**
* [Cosmos-Embed1-224p](https://huggingface.co/nvidia/Cosmos-Embed1-224p) (optimized with 8 frames and 224x224 input resolution, 256-dim output text and video embeddings)
* [Cosmos-Embed1-336p](https://huggingface.co/nvidia/Cosmos-Embed1-336p) (optimized with 8 frames and 336x336 input resolution, 768-dim output text and video embeddings)
* [Cosmos-Embed1-448p](https://huggingface.co/nvidia/Cosmos-Embed1-448p) (optimized with 8 frames and 448x448 input resolution, 768-dim output text and video embeddings)
Note that while each checkpoint was optimized at a specific fixed resolution (and defaults to it), all checkpoints support arbitrary non-square resolutions.
### License
This model is released under the [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license). Additional Information: [Apache License 2.0](https://github.com/facebookresearch/perception_models/blob/main/LICENSE.PE); [MIT](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/mit.md).
For a custom license, please contact [[email protected]](mailto:[email protected]).
Under the NVIDIA Open Model License, NVIDIA confirms:
* Models are commercially usable.
* You are free to create and distribute Derivative Models.
* NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models.
**Important Note**: If you bypass, disable, reduce the efficacy of, or circumvent any technical limitation, safety guardrail or
associated safety guardrail hyperparameter, encryption, security, digital rights management, or authentication mechanism contained
in the Model, your rights under [NVIDIA Open Model License Agreement](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license) will automatically terminate.
### Deployment Geography
Global
### Use Case
Physical AI: encompassing robotics, autonomous vehicles (AV) etc.
### Release Date
* HuggingFace: [06/15/2025](https://huggingface.co/collections/nvidia/cosmos-embed1-6833be2c9575cb89d6901504)
## Model Architecture
The architecture is based on [QFormer](https://arxiv.org/abs/2301.12597), with modifications for processing video inputs.
The video embedder processes frames individually with a ViT backbone. The per-frame ViT features are concatenated along the temporal dimension and augmented with temporal embeddings. These are then passed into the QFormer, which uses cross-attention to summarize the provided frames into a compact set of visual query tokens. The visual query tokens are then pooled into a single video embedding. The text embedder processes tokenized text via the self-attention branch of the QFormer to produce a text embedding.
The normalized text and video embeddings are aligned via a contrastive video-text loss, as well as auxiliary losses such as video-text matching and video captioning. For the 336p and 448p variants, we additionally use summary and dense distillation losses.

## Input/Output Specifications
* **Input**
* **Input Type(s)**: Text+Video
* **Input Format(s)**:
* Text: UTF-8 string
* Video: tensor scaled from 0 to 1 of RGB frame sequences.
* **Input Parameters**:
* Text: One-dimensional (1D)
* Video: Three-dimensional (3D)
* **Other Properties Related to Input**:
* The input string will be truncated or padded to 128 text tokens. When used for text-to-video (T2V) retrieval, it should contain a short description of the object, scene or action of interest.
* Arbitrary, non-square resolutions are supported in the inference code. This can be configured at model loading time.
* The model architecture supports input videos of varying lengths, but it has been optimized for 8 frames, sampled at 1-2 frames per second (FPS).
* **Output**
* **Output Type(s)**: Text+Video
* **Output Format(s)**:
* Text: floating-point normalized vector of size 256 (for 224p variant) or 768 (for 336p and 448p variants).
* Video: floating-point normalized vector of size 256 (for 224p variant) or 768 (for 336p and 448p variants).
* **Output Parameters**:
* Text: One-dimensional (1D)
* Video: One-dimensional (1D)
* **Other Properties Related to Output**: Continuous-valued L2-normalized feature vectors with a dimensionality of 256 or 768. A distance can be calculated between embeddings using cosine distance. Dense intermediate feature maps are also provided for convenience.
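Because the output vectors are L2-normalized, cosine similarity reduces to a dot product. A minimal sketch (the two-dimensional vectors below are synthetic stand-ins, not real model outputs):

```python
# With unit-norm embeddings, cosine similarity is a plain dot product,
# and cosine distance is 1 minus that dot product.
import numpy as np

def cosine_distance(a, b):
    return 1.0 - float(np.dot(a, b))

text_emb = np.array([0.6, 0.8])    # pretend embeddings, already unit-norm
video_emb = np.array([0.8, 0.6])
assert np.isclose(np.linalg.norm(text_emb), 1.0)
print(cosine_distance(text_emb, video_emb))  # -> 0.04 (up to float rounding)
```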
Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.
## Software Integration
**Runtime Engine(s):**
* [PyTorch](https://github.com/pytorch/pytorch)
* [Transformer Engine](https://github.com/NVIDIA/TransformerEngine)
**Supported Hardware Microarchitecture Compatibility:**
* NVIDIA Ampere
* NVIDIA Hopper
* NVIDIA Blackwell
Note: We have only tested Cosmos-Embed1 with BF16 precision on Ampere and Hopper GPUs. On older NVIDIA GPU architectures (e.g., Volta), you may need to switch to FP32 precision.
**Operating System(s)**
* Linux (We have not tested on other operating systems.)
# Usage
## Installation
The main model and processor dependencies can be fetched with:
```shell
pip install transformers einops torch torchvision
```
One can optionally install Transformer Engine for faster inference:
```shell
pip install --no-build-isolation transformer_engine[pytorch]
```
## Example inference
A code snippet for video and text inference is shown below.
For a step-by-step guide, please refer to the Jupyter notebook [here](https://huggingface.co/nvidia/Cosmos-Embed1-336p/blob/main/examples/example.ipynb).
```python
import decord
import numpy as np
import torch
from transformers import AutoProcessor, AutoModel
import subprocess
# load model and pre-processor
model = AutoModel.from_pretrained("nvidia/Cosmos-Embed1-224p", trust_remote_code=True).to("cuda", dtype=torch.bfloat16)
preprocess = AutoProcessor.from_pretrained("nvidia/Cosmos-Embed1-224p", trust_remote_code=True)
# load mock data
video_url = "https://upload.wikimedia.org/wikipedia/commons/3/3d/Branko_Paukovic%2C_javelin_throw.webm"
subprocess.check_call(["wget", "-O", "/tmp/javelin_throw.mp4", video_url])
reader = decord.VideoReader("/tmp/javelin_throw.mp4")
frame_ids = np.linspace(0, len(reader)-1, 8, dtype=int).tolist()
frames = reader.get_batch(frame_ids).asnumpy()
batch = np.transpose(np.expand_dims(frames, 0), (0, 1, 4, 2, 3)) # BTCHW
captions = [
"a person riding a motorcycle in the night",
"a car overtaking a white truck",
"a video of a knight fighting with a sword",
"a man wearing red spandex throwing a javelin",
"a young man javelin throwing during the evening", # distractor
"a man throwing a javelin with both hands", # distractor
]
# video and text processing
video_inputs = preprocess(videos=batch).to("cuda", dtype=torch.bfloat16)
video_out = model.get_video_embeddings(**video_inputs)
text_inputs = preprocess(text=captions).to("cuda", dtype=torch.bfloat16)
text_out = model.get_text_embeddings(**text_inputs)
# ranking and argmax
probs = (torch.softmax(model.logit_scale.exp() * video_out.visual_proj @ text_out.text_proj.T, dim=-1))[0]
print(captions[probs.argmax()])
```
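Beyond the single-clip example above, the same embeddings support corpus-level text-to-video retrieval: embed every video once, then rank the corpus against a text query by dot product. A minimal sketch with random placeholder embeddings (the corpus size and the 256-dim embedding width of the 224p variant are the only assumptions):

```python
# Ranking a corpus of precomputed, normalized video embeddings against a
# text query embedding. Random vectors stand in for real model outputs.
import torch

torch.manual_seed(0)
corpus = torch.nn.functional.normalize(torch.randn(100, 256), dim=-1)  # 100 videos
query = torch.nn.functional.normalize(torch.randn(256), dim=-1)        # 1 text query

scores = corpus @ query        # cosine similarity: both sides are unit-norm
topk = scores.topk(5)
print(topk.indices.tolist())   # indices of the 5 best-matching videos
```

In practice, `corpus` would hold `visual_proj` outputs from `get_video_embeddings` and `query` a `text_proj` output from `get_text_embeddings`.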
# Training and Evaluation
We train and evaluate the **Cosmos-Embed1** models on a variety of video datasets covering zero-shot classification and retrieval.
We use public reference training/test splits when available (e.g., Kinetics); otherwise we take a 90%/10% split. The training pool comprises approximately 8 million unique videos with multiple curated captions, sampled from robotics, autonomous vehicle, activity recognition, and general domains.
**Data Collection Method:**
* [AgiBot](https://huggingface.co/datasets/agibot-world/AgiBotWorld-Beta): Automatic/Sensors
* [BridgeV2](https://rail-berkeley.github.io/bridgedata/): Automatic/Sensors
* [Robonet](https://github.com/SudeepDasari/RoboNet/wiki/Getting-Started): Automatic/Sensors
* [DROID](https://droid-dataset.github.io): Automatic/Sensors
* [1X](https://huggingface.co/datasets/1x-technologies/world_model_raw_data): Automatic/Sensors
* [Kinetics-400/600/700](https://github.com/cvdfoundation/kinetics-dataset): Human
* [OpenDV](https://github.com/OpenDriveLab/DriveAGI#opendv-youtube): Automatic/Sensors
* AV action recognition (internal): Automatic/Sensors
* Curated captioned video dataset (internal): Human
**Labeling Method:**
* [AgiBot](https://huggingface.co/datasets/agibot-world/AgiBotWorld-Beta): Automated
* [BridgeV2](https://rail-berkeley.github.io/bridgedata/): Automated
* [Robonet](https://github.com/SudeepDasari/RoboNet/wiki/Getting-Started): Automated
* [DROID](https://droid-dataset.github.io): Automated
* [1X](https://huggingface.co/datasets/1x-technologies/world_model_raw_data): Automated
* [Kinetics-400/600/700](https://github.com/cvdfoundation/kinetics-dataset): Hybrid: Human, Automated
* [OpenDV](https://github.com/OpenDriveLab/DriveAGI#opendv-youtube): Automated
* AV action recognition (internal): Hybrid: Human, Automated
* Curated captioned video dataset (internal): Automated
## Metrics
* We compare with the state-of-the-art text/video embedders: [InternVideo2-1B](https://huggingface.co/collections/OpenGVLab/internvideo2-6618ccb574bd2f91410df5cd) and [Perception Encoder](https://github.com/facebookresearch/perception_models?tab=readme-ov-file). As input, we use 8 linearly spaced frames per clip and resize to square inputs.
* Evaluation metrics:
* (Class-Weighted) F1-score for text-video zero-shot classification
* Text-to-video (T2V) and video-to-text (V2T) recall at rank 1 (R@1) for multi-modal retrieval, including dual-softmax (DSL) reranking.
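DSL reranking and recall@1 can be sketched as follows. This is a hedged illustration of the common dual-softmax formulation (elementwise product of softmaxes over both axes of the similarity matrix); the exact temperature and variant used in the card's evaluation are not specified here and the matrix below is random.

```python
# Dual-softmax (DSL) rerank of a text-by-video similarity matrix, followed
# by T2V recall@1 assuming matched pairs lie on the diagonal.
import torch

torch.manual_seed(0)
sim = torch.randn(4, 4)                        # text x video similarities
dsl = sim.softmax(dim=0) * sim.softmax(dim=1)  # rerank over both axes
t2v_r1 = (dsl.argmax(dim=1) == torch.arange(4)).float().mean()
print(float(t2v_r1))                           # fraction of queries ranked correctly
```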
### Robotics
| | Agibot | | Bridge | |
|------------------------|----------------|---------------|----------------|---------------|
| Model Architecture | T2V-R@1 | V2T-R@1 | T2V-R@1 | V2T-R@1 |
| **InternVideo2-1B-S2** | 1.23 | 0.91 | 8.51 | 8.11 |
| **PE-Core-G14-448** | 1.16 | 0.83 | 7.19 | 5.24 |
| **Cosmos-Embed1-224** | **4.26** | **4.10** | **23.99** | **23.99** |
| **Cosmos-Embed1-336** | **7.04** | **6.33** | **24.51** | **22.90** |
| **Cosmos-Embed1-448** | **7.18** | **6.39** | **24.28** | **23.76** |
### AV
| | OpenDV | |
|------------------------|----------------|---------------|
| Model Architecture | T2V-R@1 | V2T-R@1 |
| **InternVideo2-1B-S2** | 7.40 | 8.06 |
| **PE-Core-G14-448** | 9.58 | 9.30 |
| **Cosmos-Embed1-224** | **30.11** | **30.99** |
| **Cosmos-Embed1-336** | **34.42** | **34.67** |
| **Cosmos-Embed1-448** | **34.66** | **34.87** |
### General and Action Recognition
| | Kinetics-400 (val) | Kinetics-600 (val) | Kinetics-700 (val) |
|-------------------------|---------|---------|---------|
| Model Architecture | F1 | F1 | F1 |
| **InternVideo2-1B-S2** | 62.80 | 60.20 | 52.18 |
| **PE-Core-G14-448** | 76.00 | 75.14 | 68.28 |
| **Cosmos-Embed1-224** | **83.06** | **82.22** | **70.96** |
| **Cosmos-Embed1-336** | **87.66** | **88.06** | **74.57** |
| **Cosmos-Embed1-448** | **88.21** | **88.60** | **75.27** |
## Inference
**Acceleration Engine**: PyTorch, Transformer Engine (optional)
**Test Hardware**: H100, A100
## Ethical Considerations
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.
For more detailed information on ethical considerations for this model, please see the subcards of Explainability, Bias, Safety & Security, and Privacy below.
Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
### Plus Plus (++) Promise
We value you, the datasets, the diversity they represent, and what we have been entrusted with. This model and its associated data have been:
* Verified to comply with current applicable disclosure laws, regulations, and industry standards.
* Verified to comply with applicable privacy labeling requirements.
* Annotated to describe the collector/source (NVIDIA or a third-party).
* Characterized for technical limitations.
* Reviewed to ensure proper disclosure is accessible to, maintained for, and in compliance with NVIDIA data subjects and their requests.
* Reviewed before release.
* Tagged for known restrictions and potential safety implications.
### Bias
Field | Response
:---------------------------------------------------------------------------------------------------|:---------------
Participation considerations from adversely impacted groups [protected classes](https://www.senate.ca.gov/content/protected-classes) in model design and testing: | None
Measures taken to mitigate against unwanted bias: | None
### Explainability
Field | Response
:------------------------------------------------------------------------------------------------------|:---------------------------------------------------------------------------------
Intended Application & Domain: | Embedding of text and videos for physical AI
Model Type: | ViT, QFormer
Intended Users: | Physical AI developers
Output: | Text/Video embedding vectors
Describe how the model works: | Projects inputs into aligned embedding space (text/video).
Technical Limitations: | Due to the training datasets being predominantly composed of short action-focused English captions, different text prompts may not be properly aligned to video data, producing suboptimal matching.
Verified to have met prescribed NVIDIA quality standards: | Yes
Performance Metrics: | Classification metrics (F1-score, accuracy), retrieval metrics (T2V recall@1, V2T recall@1)
Potential Known Risks: | Embedder's output can include all forms of input, including what may be considered toxic, offensive, or indecent.
Licensing: | [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license). Additional Information: [Apache License 2.0](https://github.com/facebookresearch/perception_models/blob/main/LICENSE.PE); [MIT](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/mit.md).
### Privacy
Field | Response
:----------------------------------------------------------------------------------------------------------------------------------|:-----------------------------------------------
Generatable or reverse engineerable personal information? | No
How often is dataset reviewed? | Before Release
Is there provenance for all datasets used in training? | Yes
Does data labeling (annotation, metadata) comply with privacy laws? | Yes
Applicable Privacy Policy | https://www.nvidia.com/en-us/about-nvidia/privacy-policy
### Safety
Field | Response
:---------------------------------------------------|:----------------------------------
Model Application(s): | Embedding of text and videos for physical AI applications (robotics, autonomous vehicles).
Describe the life critical impact (if present). | None Known
Use Case Restrictions: | [NVIDIA Open Model License](https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license). Additional Information: [Apache License 2.0](https://github.com/facebookresearch/perception_models/blob/main/LICENSE.PE); [MIT](https://huggingface.co/datasets/choosealicense/licenses/blob/main/markdown/mit.md).
Model and dataset restrictions: | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Dataset access is restricted during training, and dataset license constraints are adhered to. Model checkpoints are made available on Hugging Face, and may become available on cloud providers' model catalogs.