OpenS2V-Eval Leaderboard

Welcome to the OpenS2V-Eval leaderboard!

šŸ† OpenS2V-Eval is a core component of OpenS2V-Nexus, designed to establish a foundational infrastructure for Subject-to-Video (S2V) generation. It presents 180 prompts spanning seven major categories of S2V, incorporating both real and synthetic test data. To better align evaluation with human preferences, it introduce three new automatic metrics—NexusScore, NaturalScore, and GmeScore—that independently assess subject consistency, naturalness, and textual relevance in generated videos.

If you like our project, please give us a star ⭐ on GitHub to stay up to date with the latest updates.

GitHub | arXiv | Home Page | OpenS2V-Eval | OpenS2V-5M

In the table below, we use the following dimensions as the primary evaluation metrics for each model (a hedged aggregation sketch follows the list).

  1. Visual Quality: Aesthetics.
  2. Motion Amplitude: Motion.
  3. Text Relevance: GmeScore.
  4. Subject Consistency: FaceSim and NexusScore.
  5. Subject Naturalness: NaturalScore.
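To make the dimension-to-score relationship concrete, here is a minimal, purely illustrative sketch of how per-dimension scores could be combined into a single overall number. The weighting scheme, the function name `aggregate_total_score`, and all numbers below are assumptions for demonstration; the actual OpenS2V-Eval leaderboard defines its own aggregation, which is not specified on this page.

```python
# Hypothetical illustration only: names, weights, and example values are made up
# and are NOT taken from the OpenS2V-Eval leaderboard.

def aggregate_total_score(dimension_scores: dict[str, float],
                          weights: dict[str, float] | None = None) -> float:
    """Combine per-dimension scores (each normalized to [0, 1]) into one overall score.

    `dimension_scores` maps dimension names (e.g. "Aesthetics", "Motion",
    "GmeScore", "FaceSim", "NexusScore", "NaturalScore") to normalized values.
    With `weights=None`, a plain unweighted mean is used.
    """
    if weights is None:
        weights = {name: 1.0 for name in dimension_scores}
    total_weight = sum(weights[name] for name in dimension_scores)
    weighted_sum = sum(score * weights[name] for name, score in dimension_scores.items())
    return weighted_sum / total_weight


# Example usage with arbitrary, made-up numbers:
scores = {
    "Aesthetics": 0.55, "Motion": 0.40, "GmeScore": 0.70,
    "FaceSim": 0.45, "NexusScore": 0.50, "NaturalScore": 0.80,
}
print(f"Total: {aggregate_total_score(scores):.2%}")
```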

[Leaderboard table: one row per evaluated model, grouped by source type (e.g., Closed-Source), listing the submitting team (e.g., OpenS2V Team) and its percentage scores across the dimensions above.]