
OpenS2V-Eval Leaderboard
Welcome to the OpenS2V-Eval leaderboard!
OpenS2V-Eval is a core component of OpenS2V-Nexus, designed to establish a foundational infrastructure for Subject-to-Video (S2V) generation. It presents 180 prompts spanning seven major categories of S2V, incorporating both real and synthetic test data. To better align evaluation with human preferences, it introduces three new automatic metrics, NexusScore, NaturalScore, and GmeScore, which independently assess subject consistency, naturalness, and textual relevance in generated videos.
If you like our project, please give us a star ⭐ on GitHub for the latest updates.
GitHub | Arxiv | Home Page | OpenS2V-Eval | OpenS2V-5M
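If you want to run the evaluation locally, the test data can be fetched from the Hugging Face Hub. Below is a minimal sketch; the repository id `BestWishYsh/OpenS2V-Eval` is an assumption, so check the OpenS2V-Eval link above for the canonical location.

```python
# Minimal sketch: download the OpenS2V-Eval test data from the Hugging Face Hub.
# Assumption: the dataset lives at "BestWishYsh/OpenS2V-Eval"; see the
# OpenS2V-Eval link above for the canonical repository id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="BestWishYsh/OpenS2V-Eval",
    repo_type="dataset",
)
print(f"Evaluation data downloaded to: {local_dir}")
```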
In the table below, we use six dimensions as the primary evaluation metrics for each task (an illustrative aggregation sketch follows the table):
- Visual Quality: Aesthetics.
- Motion Amplitude: Motion.
- Text Relevance: GmeScore.
- Subject Consistency: FaceSim and NexusScore.
- Naturalness: NaturalScore.
| Type | Team | Total Score | Aesthetics | Motion | FaceSim | GmeScore | NexusScore | NaturalScore |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Closed-Source | OpenS2V Team | 54.46% | 44.60% | 41.60% | 40.10% | 66.20% | 45.92% | 79.06% |
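The official weighting behind the Total Score column is defined in the GitHub repository. Purely as an illustration, the hedged sketch below combines the six dimension scores from the row above with uniform weights, a hypothetical choice, which is why it does not reproduce the 54.46% shown in the table.

```python
# Illustrative sketch only: combine the six dimension scores into one number.
# The uniform weights below are hypothetical; the official Total Score
# formula is defined in the OpenS2V-Nexus GitHub repository.
scores = {
    "aesthetics": 44.60,  # Visual Quality
    "motion": 41.60,      # Motion Amplitude
    "facesim": 40.10,     # Subject Consistency (faces)
    "gme": 66.20,         # Text Relevance
    "nexus": 45.92,       # Subject Consistency (open-domain subjects)
    "natural": 79.06,     # Naturalness
}
weights = {name: 1 / len(scores) for name in scores}  # uniform (assumption)
total = sum(weights[name] * value for name, value in scores.items())
print(f"Total under uniform weighting: {total:.2f}%")  # ~52.91%, not 54.46%
```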
In the table below, we use five dimensions as the primary evaluation metrics for each task:
- Visual Quality: Aesthetics.
- Motion Amplitude: Motion.
- Text Relevance: GmeScore.
- Subject Consistency: FaceSim.
- Naturalness: NaturalScore.
| Type | Team | Total Score | Aesthetics | Motion | FaceSim | GmeScore | NaturalScore |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Closed-Source | OpenS2V Team | 60.20% | 52.75% | 31.83% | 57.79% | 71.42% | 74.52% |
| Closed-Source | OpenS2V Team | 59.13% | 50.94% | 50.55% | 41.02% | 67.79% | 78.28% |
| Open-Source | OpenS2V Team | 58.69% | 49.14% | 41.24% | 55.02% | 72.55% | 68.33% |
| Open-Source | OpenS2V Team | 58.57% | 52.78% | 11.76% | 64.65% | 69.53% | 74.33% |
| Open-Source | OpenS2V Team | 55.85% | 49.67% | 15.13% | 62.25% | 69.78% | 67.00% |
| Open-Source | OpenS2V Team | 54.52% | 39.93% | 35.16% | 48.57% | 68.40% | 69.22% |
| Open-Source | OpenS2V Team | 54.27% | 39.88% | 31.98% | 55.02% | 63.63% | 67.33% |
| Open-Source | OpenS2V Team | 53.64% | 50.80% | 14.14% | 46.30% | 72.17% | 71.67% |
| Open-Source | OpenS2V Team | 53.32% | 44.13% | 31.76% | 43.83% | 73.67% | 66.44% |
| Open-Source | OpenS2V Team | 52.97% | 41.76% | 38.12% | 43.14% | 72.03% | 64.67% |
| Closed-Source | OpenS2V Team | 52.56% | 52.39% | 28.94% | 29.41% | 75.03% | 72.53% |
| Open-Source | OpenS2V Team | 52.31% | 31.76% | 50.09% | 76.45% | 45.28% | 47.08% |
| Closed-Source | OpenS2V Team | 51.11% | 47.33% | 14.80% | 38.50% | 70.42% | 71.99% |
| Open-Source | OpenS2V Team | 49.80% | 45.60% | 23.48% | 32.42% | 72.68% | 68.11% |
| Open-Source | OpenS2V Team | 49.02% | 53.18% | 16.87% | 22.29% | 73.61% | 73.00% |
| Open-Source | OpenS2V Team | 46.28% | 51.45% | 8.78% | 19.98% | 73.27% | 70.89% |
| Open-Source | OpenS2V Team | 43.37% | 42.03% | 33.54% | 31.56% | 52.91% | 54.03% |
In the table below, we use six dimensions as the primary evaluation metrics for each task:
- Visual Quality: Aesthetics.
- Motion Amplitude: Motion.
- Text Relevance: GmeScore.
- Subject Consistency: FaceSim and NexusScore.
- Naturalness: NaturalScore.
| Type | Team | Total Score | Aesthetics | Motion | FaceSim | GmeScore | NexusScore | NaturalScore |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Closed-Source | OpenS2V Team | 58.00% | 41.30% | 35.54% | 64.65% | 58.55% | 51.33% | 77.33% |
| Open-Source | OpenS2V Team | 53.17% | 47.46% | 41.55% | 51.82% | 70.07% | 35.35% | 69.35% |
| Closed-Source | OpenS2V Team | 53.12% | 35.63% | 36.40% | 39.26% | 61.99% | 48.24% | 81.40% |
| Open-Source | OpenS2V Team | 51.64% | 33.83% | 21.60% | 54.42% | 61.93% | 48.63% | 70.60% |
| Open-Source | OpenS2V Team | 51.64% | 34.08% | 26.83% | 55.93% | 54.31% | 50.75% | 68.66% |
| Open-Source | OpenS2V Team | 49.95% | 42.98% | 19.30% | 44.03% | 65.61% | 37.78% | 76.00% |
| Closed-Source | OpenS2V Team | 48.93% | 38.64% | 31.90% | 32.94% | 62.19% | 47.34% | 70.60% |
| Closed-Source | OpenS2V Team | 48.67% | 34.78% | 24.40% | 36.20% | 65.56% | 45.20% | 72.60% |
| Open-Source | OpenS2V Team | 47.33% | 41.81% | 33.78% | 22.38% | 65.35% | 38.52% | 76.00% |
| Open-Source | OpenS2V Team | 44.28% | 42.58% | 18.00% | 18.02% | 65.93% | 36.26% | 76.00% |
Submission Guidelines
- Fill in 'Model Name' if this is your first submission, or 'Revision Model Name' if you want to update an existing result.
- Enter your home page under 'Model Link' and your team name under 'Your Team Name'.
- After evaluation, follow the guidance in the GitHub repository to obtain `model_name.json` and upload it here (a hedged sketch of writing such a file follows this list).
- Click the 'Submit Eval' button.
- Click 'Refresh' to obtain the updated leaderboard.
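For reference, here is a hedged sketch of assembling a result file in Python. The field names below are placeholders, not the official format; the actual schema of `model_name.json` is specified in the GitHub repository.

```python
# Hypothetical sketch of writing a submission file. The exact schema of
# model_name.json is defined in the GitHub repository; the field names used
# here are placeholders, not the official format.
import json

results = {
    "model_name": "my_s2v_model",  # placeholder model name
    "aesthetics": 0.0,             # Visual Quality
    "motion": 0.0,                 # Motion Amplitude
    "facesim": 0.0,                # Subject Consistency (faces)
    "gme_score": 0.0,              # Text Relevance
    "nexus_score": 0.0,            # Subject Consistency (open-domain subjects)
    "natural_score": 0.0,          # Naturalness
}

with open("my_s2v_model.json", "w") as f:
    json.dump(results, f, indent=2)
```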