---
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license
library_name: cosmos
tags:
  - nvidia
  - nemo
  - cosmos
extra_gated_prompt: >-
  # NVIDIA Open Model License Agreement

  Version Release Date: January 6, 2025

  This NVIDIA Open Model License Agreement (the "<ins>Agreement</ins>") is a
  legal agreement between the Legal Entity You represent, or if no entity is
  identified, You and NVIDIA Corporation and its Affiliates
  ("<ins>NVIDIA</ins>") and governs Your use of the Models that NVIDIA provides
  to You under this Agreement. NVIDIA and You are each a "<ins>party</ins>" and
  collectively the "<ins>parties</ins>."

  NVIDIA models released under this Agreement are intended to be used
  permissively and enable the further development of AI technologies. Subject to
  the terms of this Agreement, NVIDIA confirms that:

  * Models are commercially usable.

  * You are free to create and distribute Derivative Models.

  * NVIDIA does not claim ownership to any outputs generated using the Models or
  Model Derivatives.

  By using, reproducing, modifying, distributing, performing or displaying any
  portion or element of the Model or Derivative Model, or otherwise accepting
  the terms of this Agreement, you agree to be bound by this Agreement.

  ## 1. Definitions

  The following definitions apply to this Agreement:

    1.1. "<ins>NVIDIA Cosmos Model</ins>" means a multimodal Model shared under this Agreement.

    1.2. "<ins>Derivative Model</ins>" means all (a) modifications to the Model, (b) works based on the Model, and (c) any other derivative works of the Model. An output is not a Derivative Model.

    1.3. "<ins>Legal Entity</ins>" means the union of the acting entity and all other entities that <ins>control</ins>, are controlled by, or are under common control with that entity. For the purposes of this definition, "<ins>control</ins>" means (a) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (b) ownership of fifty percent (50%) or more of the outstanding shares, or (c) beneficial ownership of such entity.

    1.4. "<ins>Model</ins>" means the machine learning model, software, checkpoints, learnt weights, algorithms, parameters, configuration files and documentation shared under this Agreement.

    1.5. "<ins>You</ins>" or "<ins>Your</ins>" means an individual or Legal Entity exercising permissions granted by this Agreement.

  ## 2. Conditions for Use, License Grant, AI Ethics and IP Ownership

    2.1. Conditions for Use. The Model and any Derivative Model are subject to additional terms as described in Section 2 and Section 3 of this Agreement and govern Your use. If You institute copyright or patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Model or a Derivative Model constitutes direct or contributory copyright or patent infringement, then any licenses granted to You under this Agreement for that Model or Derivative Model will terminate as of the date such litigation is filed. If You bypass, disable, reduce the efficacy of, or circumvent any technical limitation, safety guardrail or associated safety guardrail hyperparameter, encryption, security, digital rights management, or authentication mechanism contained in the Model, your rights under this Agreement will automatically terminate. NVIDIA may update this Agreement to comply with legal and regulatory requirements at any time and You agree to either comply with any updated license or cease Your copying, use, and distribution of the Model and any Derivative Model.

    2.2. License Grant. The rights granted herein are explicitly conditioned on Your full compliance with the terms of this Agreement. Subject to the terms and conditions of this Agreement, NVIDIA hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, revocable (as stated in Section 2.1) license to publicly perform, publicly display, reproduce, use, create derivative works of, make, have made, sell, offer for sale, distribute (through multiple tiers of distribution) and import the Model.

    2.3. AI Ethics. Use of the Models under the Agreement must be consistent with NVIDIA's Trustworthy AI terms found at https://www.nvidia.com/en-us/agreements/trustworthy-ai/terms/.

    2.4. NVIDIA owns the Model and any Model Derivatives created by NVIDIA. Subject to NVIDIA's underlying ownership rights in the Model or its Model Derivatives, You are and will be the owner of Your Model Derivatives. NVIDIA claims no ownership rights in outputs. You are responsible for outputs and their subsequent uses. Except as expressly granted in this Agreement, (a) NVIDIA reserves all rights, interests and remedies in connection with the Model and (b) no other license or right is granted to you by implication, estoppel or otherwise.

  ## 3. Redistribution

  You may reproduce and distribute copies of the Model or Derivative Models
  thereof in any medium, with or without modifications, provided that You meet
  the following conditions:

    3.1. If you distribute the Model, You must give any other recipients of the Model a copy of this Agreement and include the following attribution notice within a "Notice" text file with such copies: "Licensed by NVIDIA Corporation under the NVIDIA Open Model License";

    3.2. If you distribute or make available a NVIDIA Cosmos Model, or a product or service (including an AI model) that contains or uses a NVIDIA Cosmos Model, use a NVIDIA Cosmos Model to create a Derivative Model, or use a NVIDIA Cosmos Model or its outputs to create, train, fine tune, or otherwise improve an AI model, you will include "Built on NVIDIA Cosmos" on a related website, user interface, blogpost, about page, or product documentation; and

    3.3. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Models as a whole, provided Your use, reproduction, and distribution of the Model otherwise complies with the conditions stated in this Agreement.

  ## 4. Trademarks

  This Agreement does not grant permission to use the trade names, trademarks,
  service marks, or product names of NVIDIA, except as required for reasonable
  and customary use in describing the origin of the Model and reproducing the
  content of the "Notice" text file.

  ## **5. Disclaimer of Warranty**

  **Unless required by applicable law or agreed to in writing, NVIDIA provides
  the Model on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
  either express or implied, including, without limitation, any warranties or
  conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
  PARTICULAR PURPOSE. You are solely responsible for determining the
  appropriateness of using or redistributing the Model, Derivative Models and
  outputs and assume any risks associated with Your exercise of permissions
  under this Agreement.**

  ## **6. Limitation of Liability**

  **In no event and under no legal theory, whether in tort (including
  negligence), contract, or otherwise, unless required by applicable law (such
  as deliberate and grossly negligent acts) or agreed to in writing, will NVIDIA
  be liable to You for damages, including any direct, indirect, special,
  incidental, or consequential damages of any character arising as a result of
  this Agreement or out of the use or inability to use the Model, Derivative
  Models or outputs (including but not limited to damages for loss of goodwill,
  work stoppage, computer failure or malfunction, or any and all other
  commercial damages or losses), even if NVIDIA has been advised of the
  possibility of such damages.**

  ## 7. Indemnity

  You will indemnify and hold harmless NVIDIA from and against any claim by any
  third party arising out of or related to your use or distribution of the
  Model, Model Derivatives or outputs.

  ## 8. Feedback

  NVIDIA appreciates your feedback, and You agree that NVIDIA may use it without
  restriction or compensation to You.

  ## 9. Governing Law

  This Agreement will be governed in all respects by the laws of the United
  States and the laws of the State of Delaware, without regard to conflict of
  laws principles or the United Nations Convention on Contracts for the
  International Sale of Goods. The state and federal courts residing in Santa
  Clara County, California will have exclusive jurisdiction over any dispute or
  claim arising out of or related to this Agreement, and the parties irrevocably
  consent to personal jurisdiction and venue in those courts; except that,
  either party may apply for injunctive remedies or an equivalent type of urgent
  legal relief in any jurisdiction.

  ## 10. Trade and Compliance

  You agree to comply with all applicable export, import, trade and economic
  sanctions laws and regulations, as amended, including without limitation U.S.
  Export Administration Regulations and Office of Foreign Assets Control
  regulations. These laws include restrictions on destinations, end-users and
  end-use.
extra_gated_fields:
  By clicking Submit below, I accept the terms of the NVIDIA Open Model License Agreement and acknowledge that I am an adult of legal age of majority in the country in which the Cosmos Models will be used and have authority to accept this Agreement: checkbox
extra_gated_description: >-
  The information you provide will be collected, stored, processed and shared in
  accordance with the [NVIDIA Privacy
  Policy](https://www.nvidia.com/en-us/about-nvidia/privacy-policy/).
extra_gated_button_content: Submit
---

# Cosmos-Embed1: A joint video-text embedder for physical AI

Website | Hugging Face | Demo app

## Model Overview

### Description

Cosmos-Embed1 is a joint video-text embedder tailored for physical AI. It can be used for text-to-video retrieval, inverse video search, semantic deduplication, zero-shot and k-nearest-neighbors (kNN) classification, and as a base model for video curation tasks. It has state-of-the-art (SOTA) performance on autonomous vehicle (AV) and robotics datasets, while maintaining competitive performance in general domains. This model is ready for commercial use.
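As an illustration of the classification use cases, the following is a minimal sketch (not code from this repository) of cosine-similarity kNN classification over precomputed embeddings; `train_embs`, `train_labels`, and `query_embs` stand in for L2-normalized model outputs:

```python
import torch

def knn_classify(query_embs, train_embs, train_labels, k=5):
    # embeddings are L2-normalized, so a dot product equals cosine similarity
    sims = query_embs @ train_embs.T            # (Q, N) similarity matrix
    topk = sims.topk(k, dim=1).indices          # (Q, k) nearest-neighbor indices
    votes = train_labels[topk]                  # (Q, k) integer neighbor labels
    return torch.mode(votes, dim=1).values      # majority vote per query
```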

**Model Developer:** NVIDIA

### Model Versions

The Cosmos-Embed1 release includes the following embedders:

  • Cosmos-Embed1
    • Cosmos-Embed1-224p (optimized with 8 frames and 224x224 input resolution, 256-dim output text and video embeddings)
    • Cosmos-Embed1-336p (optimized with 8 frames and 336x336 input resolution, 768-dim output text and video embeddings)
    • Cosmos-Embed1-448p (optimized with 8 frames and 448x448 input resolution, 768-dim output text and video embeddings)

Note: while each checkpoint was optimized at a specific fixed resolution (and defaults to it), all variants support arbitrary non-square resolutions, as sketched below.
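How the resolution override is exposed is defined by this repository's custom code; purely as a hypothetical sketch (the `resolution` keyword below is illustrative, not a confirmed parameter name; consult the shipped processor/config code for the actual field):

```python
from transformers import AutoModel, AutoProcessor

# hypothetical: "resolution" is an illustrative keyword, not a confirmed API
preprocess = AutoProcessor.from_pretrained(
    "nvidia/Cosmos-Embed1-224p",
    trust_remote_code=True,
    resolution=(336, 448),  # (height, width), non-square
)
model = AutoModel.from_pretrained("nvidia/Cosmos-Embed1-224p", trust_remote_code=True)
```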

### License

This model is released under the NVIDIA Open Model License. Additional Information: Apache License 2.0; MIT.

For a custom license, please contact [email protected].

Under the NVIDIA Open Model License, NVIDIA confirms:

  • Models are commercially usable.
  • You are free to create and distribute Derivative Models.
  • NVIDIA does not claim ownership to any outputs generated using the Models or Derivative Models.

Important Note: If you bypass, disable, reduce the efficacy of, or circumvent any technical limitation, safety guardrail or associated safety guardrail hyperparameter, encryption, security, digital rights management, or authentication mechanism contained in the Model, your rights under NVIDIA Open Model License Agreement will automatically terminate.

### Deployment Geography

Global

### Use Case

Physical AI, encompassing robotics, autonomous vehicles (AVs), etc.

### Release Date

## Model Architecture

The architecture is based on QFormer, with modifications for processing video inputs.

The video embedder processes frames individually with a ViT backbone. The per-frame ViT features are concatenated along the temporal dimension and augmented with temporal embeddings. These are then passed to the QFormer, which uses cross-attention to summarize the provided frames into a compact set of visual query tokens. The visual query tokens are then pooled into a single video embedding. The text embedder processes tokenized text through the self-attention branch of the QFormer to produce a text embedding.
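In rough pseudocode, the video branch behaves like the sketch below; module names, signatures, and shapes are illustrative rather than the repository's actual implementation:

```python
import torch
import torch.nn.functional as F

def embed_video(frames, vit, temporal_embed, qformer, query_tokens, proj):
    # frames: (B, T, C, H, W); illustrative sketch only
    B, T = frames.shape[:2]
    # 1) per-frame ViT features: (B, T, P, D), P = patch tokens per frame
    feats = torch.stack([vit(frames[:, t]) for t in range(T)], dim=1)
    # 2) add learned temporal embeddings, then concatenate over time
    feats = feats + temporal_embed[:T].view(1, T, 1, -1)
    feats = feats.flatten(1, 2)                          # (B, T*P, D)
    # 3) QFormer cross-attends a small set of query tokens to the frame features
    queries = qformer(query_tokens.expand(B, -1, -1), context=feats)
    # 4) pool the query tokens into one embedding and L2-normalize the projection
    return F.normalize(proj(queries.mean(dim=1)), dim=-1)
```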

The normalized text and video embeddings are aligned via a contrastive video-text loss, as well as auxiliary losses such as video-text matching and video captioning. For the 336p and 448p variants, we additionally use summary and dense distillation losses.
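The main alignment term is a standard symmetric contrastive (CLIP/InfoNCE-style) objective; a minimal sketch, assuming a batch of paired, L2-normalized video and text embeddings:

```python
import torch
import torch.nn.functional as F

def video_text_contrastive_loss(video_emb, text_emb, logit_scale):
    # video_emb, text_emb: (B, D), L2-normalized; logit_scale is a learned scalar
    logits = logit_scale * video_emb @ text_emb.T        # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # symmetric cross-entropy: video i must match text i, and vice versa
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))
```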


## Input/Output Specifications

  • Input

    • Input Type(s): Text+Video
    • Input Format(s):
      • Text: UTF-8 string
      • Video: tensor of RGB frame sequences with values scaled to [0, 1].
    • Input Parameters:
      • Text: One-dimensional (1D)
      • Video: Three-dimensional (3D)
    • Other Properties Related to Input:
      • The input string will be truncated or padded to 128 text tokens. When used for text-to-video (T2V) retrieval, it should contain a short description of the object, scene or action of interest.
      • Arbitrary, non-square resolutions are supported in the inference code. This can be configured at model loading time.
      • The model architecture supports input videos of varying lengths, but it has been optimized for 8 frames, sampled at 1-2 frames per second (FPS).
  • Output

    • Output Type(s): Text+Video
    • Output Format(s):
      • Text: floating-point normalized vector of size 256 (for 224p variant) or 768 (for 336p and 448p variants).
      • Video: floating-point normalized vector of size 256 (for 224p variant) or 768 (for 336p and 448p variants).
    • Output Parameters:
      • Text: One-dimensional (1D)
      • Video: One-dimensional (1D)
    • Other Properties Related to Output: Continuous-valued L2-normalized feature vectors with a dimensionality of 256 or 768. A distance can be calculated between embeddings using cosine distance. Dense intermediate feature maps are also provided for convenience.
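Because the outputs are L2-normalized, cosine similarity is just a dot product; for example:

```python
import torch
import torch.nn.functional as F

# stand-ins for model outputs: L2-normalized 256-dim embeddings (224p variant)
text_emb = F.normalize(torch.randn(256), dim=0)
video_emb = F.normalize(torch.randn(256), dim=0)

cos_sim = torch.dot(text_emb, video_emb)   # in [-1, 1]; higher means more similar
cos_dist = 1.0 - cos_sim                   # cosine distance
```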

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

## Software Integration

Runtime Engine(s):

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Ampere
  • NVIDIA Hopper
  • NVIDIA Blackwell

Note: We have only tested Cosmos-Embed1 with BF16 precision on Ampere and Hopper GPUs. If you are using older versions of NVIDIA GPUs (e.g., NVIDIA Volta GPUs), you may need to switch to FP32 precision.
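One hedged way to select the precision at load time is to key off the GPU's compute capability (BF16 requires compute capability 8.0, i.e. Ampere, or newer):

```python
import torch

# BF16 needs compute capability >= 8.0 (Ampere+); otherwise fall back to FP32
# (assumes a CUDA device is available)
major, _ = torch.cuda.get_device_capability()
dtype = torch.bfloat16 if major >= 8 else torch.float32
```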

### Operating System(s)

  • Linux (We have not tested on other operating systems.)

## Usage

### Installation

The main model and processor dependencies can be fetched with:

```shell
pip install transformers einops torch torchvision
```

One can optionally install Transformer Engine for faster inference:

```shell
pip install --no-build-isolation transformer_engine[pytorch]
```

### Example inference

A code snippet for video and text inference is shown below; it additionally requires the `decord` package for video decoding (`pip install decord`). For a step-by-step guide, please refer to the Jupyter notebook here.

```python
import subprocess

import decord
import numpy as np
import torch
from transformers import AutoModel, AutoProcessor

# load model and pre-processor
model = AutoModel.from_pretrained("nvidia/Cosmos-Embed1-224p", trust_remote_code=True).to("cuda", dtype=torch.bfloat16)
preprocess = AutoProcessor.from_pretrained("nvidia/Cosmos-Embed1-224p", trust_remote_code=True)

# download a test video and sample 8 linearly spaced frames
video_url = "https://upload.wikimedia.org/wikipedia/commons/3/3d/Branko_Paukovic%2C_javelin_throw.webm"
subprocess.check_call(["wget", "-O", "/tmp/javelin_throw.mp4", video_url])
reader = decord.VideoReader("/tmp/javelin_throw.mp4")
frame_ids = np.linspace(0, len(reader) - 1, 8, dtype=int).tolist()
frames = reader.get_batch(frame_ids).asnumpy()
batch = np.transpose(np.expand_dims(frames, 0), (0, 1, 4, 2, 3))  # BTCHW
captions = [
    "a person riding a motorcycle in the night",
    "a car overtaking a white truck",
    "a video of a knight fighting with a sword",
    "a man wearing red spandex throwing a javelin",
    "a young man javelin throwing during the evening",  # distractor
    "a man throwing a javelin with both hands",  # distractor
]

# video and text processing
video_inputs = preprocess(videos=batch).to("cuda", dtype=torch.bfloat16)
video_out = model.get_video_embeddings(**video_inputs)
text_inputs = preprocess(text=captions).to("cuda", dtype=torch.bfloat16)
text_out = model.get_text_embeddings(**text_inputs)

# rank captions against the clip and print the best match
probs = torch.softmax(model.logit_scale.exp() * video_out.visual_proj @ text_out.text_proj.T, dim=-1)[0]
print(captions[probs.argmax()])
```
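The snippet above ranks captions for one clip (video-to-text). Reusing the same outputs, the opposite direction (text-to-video) scores clips against a single caption; with several videos embedded, the highest-scoring clip is the retrieval result:

```python
# text-to-video: score clips for one caption by cosine similarity
query = text_out.text_proj[3:4]                         # "a man wearing red spandex ..."
scores = (query @ video_out.visual_proj.T).squeeze(0)   # one score per embedded clip
print(f"best matching clip index: {scores.argmax().item()}")
```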

## Training and Evaluation

We train and evaluate the Cosmos-Embed1 models on a variety of video datasets covering zero-shot classification and retrieval. We use public reference training/test splits when available (e.g., Kinetics); otherwise we use a 90%/10% split. The training pool is approximately 8M unique videos with multiple curated captions, sampled from robotics, autonomous vehicle, activity recognition, and general domains.

Data Collection Method:

  • AgiBot: Automatic/Sensors
  • BridgeV2: Automatic/Sensors
  • Robonet: Automatic/Sensors
  • DROID: Automatic/Sensors
  • 1X: Automatic/Sensors
  • Kinetics-400/600/700: Human
  • OpenDV: Automatic/Sensors
  • AV action recognition (internal): Automatic/Sensors
  • Curated captioned video dataset (internal): Human

Labeling Method:

### Metrics

  • We compare with the state-of-the-art text/video embedders: InternVideo2-1B and Perception Encoder. As input, we use 8 linearly spaced frames per clip and resize to square inputs.
  • Evaluation metrics:
    • (Class-weighted) F1-score for text-video zero-shot classification.
    • Text-to-video (T2V) and video-to-text (V2T) recall at k=1 for multi-modal retrieval, including DSL reranking (see the sketch after this list).
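For reference, a minimal sketch of recall@1 with dual-softmax (DSL) reranking over a text-video similarity matrix, assuming text i is paired with video i:

```python
import torch

def dsl_rerank(sim):
    # dual-softmax reranking: elementwise product of softmaxes over both axes
    return sim.softmax(dim=0) * sim.softmax(dim=1)

def recall_at_1(sim):
    # sim: (N, N) text-to-video similarities; ground truth lies on the diagonal
    hits = sim.argmax(dim=1) == torch.arange(sim.size(0))
    return hits.float().mean().item()
```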

#### Robotics

| Model Architecture | AgiBot T2V-R@1 | AgiBot V2T-R@1 | Bridge T2V-R@1 | Bridge V2T-R@1 |
|---|---|---|---|---|
| InternVideo2-1B-S2 | 1.23 | 0.91 | 8.51 | 8.11 |
| PE-Core-G14-448 | 1.16 | 0.83 | 7.19 | 5.24 |
| Cosmos-Embed1-224 | 4.26 | 4.10 | 23.99 | 23.99 |
| Cosmos-Embed1-336 | 7.04 | 6.33 | 24.51 | 22.90 |
| Cosmos-Embed1-448 | 7.18 | 6.39 | 24.28 | 23.76 |

#### AV

| Model Architecture | OpenDV T2V-R@1 | OpenDV V2T-R@1 |
|---|---|---|
| InternVideo2-1B-S2 | 7.40 | 8.06 |
| PE-Core-G14-448 | 9.58 | 9.30 |
| Cosmos-Embed1-224 | 30.11 | 30.99 |
| Cosmos-Embed1-336 | 34.42 | 34.67 |
| Cosmos-Embed1-448 | 34.66 | 34.87 |

#### General and Action Recognition

| Model Architecture | Kinetics-400 (val) F1 | Kinetics-600 (val) F1 | Kinetics-700 (val) F1 |
|---|---|---|---|
| InternVideo2-1B-S2 | 62.80 | 60.20 | 52.18 |
| PE-Core-G14-448 | 76.00 | 75.14 | 68.28 |
| Cosmos-Embed1-224 | 83.06 | 82.22 | 70.96 |
| Cosmos-Embed1-336 | 87.66 | 88.06 | 74.57 |
| Cosmos-Embed1-448 | 88.21 | 88.60 | 75.27 |

## Inference

Acceleration Engine: PyTorch, Transformer Engine (optional)

Test Hardware: H100, A100

## Ethical Considerations

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Users are responsible for model inputs and outputs. Users are responsible for ensuring safe integration of this model, including implementing guardrails as well as other safety mechanisms, prior to deployment.

For more detailed information on ethical considerations for this model, please see the subcards of Explainability, Bias, Safety & Security, and Privacy below.

Please report security vulnerabilities or NVIDIA AI Concerns here.

### Plus Plus (++) Promise

We value you, the datasets, the diversity they represent, and what we have been entrusted with. This model and its associated data have been:

  • Verified to comply with current applicable disclosure laws, regulations, and industry standards.
  • Verified to comply with applicable privacy labeling requirements.
  • Annotated to describe the collector/source (NVIDIA or a third-party).
  • Characterized for technical limitations.
  • Reviewed to ensure proper disclosure is accessible to, maintained for, and in compliance with NVIDIA data subjects and their requests.
  • Reviewed before release.
  • Tagged for known restrictions and potential safety implications.

### Bias

| Field | Response |
|---|---|
| Participation considerations from adversely impacted groups (protected classes) in model design and testing | None |
| Measures taken to mitigate against unwanted bias | None |

### Explainability

| Field | Response |
|---|---|
| Intended Application & Domain | Embedding of text and videos for physical AI |
| Model Type | ViT, QFormer |
| Intended Users | Physical AI developers |
| Output | Text/video embedding vectors |
| Describe how the model works | Projects inputs into an aligned text/video embedding space. |
| Technical Limitations | Because the training datasets are predominantly composed of short, action-focused English captions, other kinds of text prompts may not align well with video data, producing suboptimal matching. |
| Verified to have met prescribed NVIDIA quality standards | Yes |
| Performance Metrics | Classification metrics (F1-score, accuracy), retrieval metrics (T2V recall@1, V2T recall@1) |
| Potential Known Risks | The embedder's output can include all forms of input, including what may be considered toxic, offensive, or indecent. |
| Licensing | NVIDIA Open Model License. Additional Information: Apache License 2.0; MIT. |

### Privacy

| Field | Response |
|---|---|
| Generatable or reverse engineerable personal information? | No |
| How often is dataset reviewed? | Before Release |
| Is there provenance for all datasets used in training? | Yes |
| Does data labeling (annotation, metadata) comply with privacy laws? | Yes |
| Applicable Privacy Policy | https://www.nvidia.com/en-us/about-nvidia/privacy-policy |

### Safety

| Field | Response |
|---|---|
| Model Application(s) | Embedding of text and videos for physical AI applications (robotics, autonomous vehicles). |
| Describe the life critical impact (if present) | None known |
| Use Case Restrictions | NVIDIA Open Model License. Additional Information: Apache License 2.0; MIT. |
| Model and dataset restrictions | The Principle of Least Privilege (PoLP) is applied, limiting access for dataset generation and model development. Dataset access is restricted during training, and dataset license constraints are adhered to. Model checkpoints are made available on Hugging Face, and may become available on cloud providers' model catalogs. |