
SynCamVideo Dataset

SynCamVideo Dataset is a multi‑camera synchronized video dataset rendered with Unreal Engine 5. It comprises 1,000 distinct scenes, each captured by 36 cameras, resulting in a total of 36,000 videos. The dataset features 50 different animal species as primary objects and uses 20 locations from Poly Haven as backgrounds. In each scene, 1–2 animals are selected from the 50 species and moved along predefined trajectories while the background is randomly chosen from the 20 locations, with all 36 cameras recording the motion simultaneously.

Source
github
Created
Dec 6, 2024
Updated
Dec 11, 2024
Overview

Dataset description and usage context

SynCamVideo Dataset

1. Dataset Introduction

SynCamVideo Dataset is a multi‑camera synchronized video dataset rendered with Unreal Engine 5. The dataset contains 1,000 distinct scenes, each captured by 36 cameras, for a total of 36,000 videos. Main characteristics include:

  • Primary Objects: 50 different animal species.
  • Backgrounds: 20 different locations sourced from Poly Haven.
  • Scene Setup: In each scene, 1–2 animals selected from the 50 species serve as primary objects and follow predefined trajectories; the background is randomly chosen from the 20 locations, and all 36 cameras record the motion simultaneously.

Cameras are placed on a hemispherical surface around each scene at distances ranging from 3.5 m to 9 m from the scene centre. To minimise domain shift between rendered and real‑world videos, camera elevation angles are limited to 0°–45° and azimuth angles to 0°–360°. Each camera position is randomly sampled within these constraints per scene rather than using a fixed set across all scenes.
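The per-scene sampling described above can be sketched as follows. The function name and the z-up spherical-to-Cartesian convention are illustrative assumptions, not part of the dataset's tooling; only the distance and angle ranges come from the description.

```python
import math
import random

def sample_camera(min_dist=3.5, max_dist=9.0,
                  min_elev_deg=0.0, max_elev_deg=45.0):
    """Sample one camera position on the hemisphere around the scene centre.

    Distance (3.5-9 m), elevation (0-45 deg), and azimuth (0-360 deg)
    ranges follow the dataset description; the z-up Cartesian convention
    is an assumption.
    """
    r = random.uniform(min_dist, max_dist)
    elev = math.radians(random.uniform(min_elev_deg, max_elev_deg))
    azim = math.radians(random.uniform(0.0, 360.0))
    x = r * math.cos(elev) * math.cos(azim)
    y = r * math.cos(elev) * math.sin(azim)
    z = r * math.sin(elev)
    return (x, y, z)

# One scene's rig: 36 cameras, each sampled independently.
rig = [sample_camera() for _ in range(36)]
```

Because positions are resampled per scene rather than fixed, no two training scenes share the same 36-camera rig.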

2. File Structure

SynCamVideo
├── train
│   ├── videos                       # training videos
│   │   ├── scene1                   # a single scene
│   │   │   ├── xxx.mp4              # synchronized 100‑frame video, 480×720 resolution
│   │   │   └── ...
│   │   ...
│   │   └── scene1000
│   │       ├── xxx.mp4
│   │       └── ...
│   └── cameras                      # training camera parameters
│       ├── scene1                   # a single scene
│       │   └── xxx.json             # extrinsic parameters for the videos
│       │   ...
│       └── scene1000
│           └── xxx.json
└── val
    └── cameras                      # validation cameras
        ├── Hemi36_4m_0              # distance=4 m, elevation=0°
        │   └── Hemi36_4m_0.json     # 36 cameras: distance=4 m, elevation=0°, azimuth=i * 10°
        │   ...
        └── Hemi36_7m_45
            └── Hemi36_7m_45.json

3. Useful Scripts

  • Camera Visualization
python vis_cam.py --pose_file_path ./val/cameras/Hemi36_4m_0/Hemi36_4m_0_transforms.json --num_cameras 36

The visualization script is adapted from CameraCtrl.
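The validation rig names encode a fixed distance and elevation, with the 36 azimuths stepping i * 10° as noted in the file tree. A minimal sketch of the corresponding camera centres, assuming a z-up convention (the function name is illustrative):

```python
import math

def hemi36_positions(distance, elev_deg):
    """Camera centres for a Hemi36_<d>m_<e> validation rig.

    36 cameras share one distance and elevation; azimuth steps i * 10
    degrees, as in the file tree above. The z-up Cartesian convention
    is an assumption.
    """
    elev = math.radians(elev_deg)
    centres = []
    for i in range(36):
        azim = math.radians(i * 10.0)
        centres.append((distance * math.cos(elev) * math.cos(azim),
                        distance * math.cos(elev) * math.sin(azim),
                        distance * math.sin(elev)))
    return centres

# e.g. the Hemi36_4m_0 rig: distance=4 m, elevation=0 degrees
rig = hemi36_positions(4.0, 0.0)
```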

4. Dataset Applications

SynCamVideo Dataset can be used to train multi‑camera synchronized video generation models, which are applicable to downstream tasks such as film production and multi‑view data generation.
