Dataset asset · Tags: Open Source Community, Behavior Analysis, Traffic Safety
WTS Dataset
Woven Traffic Safety (WTS) is provided by Woven by Toyota, Inc. to highlight detailed vehicle and pedestrian behaviors in various simulated traffic events, including accidents. The dataset contains over 1.2 k video events across more than 130 traffic scenarios, combining vehicle‑ego and fixed‑camera views. Each event includes a comprehensive textual description of observed behaviors and context. Additionally, about 4.8 k pedestrian‑related traffic videos from BDD100K are supplied with detailed textual annotations for external training and testing.
Source
github
Created
Jan 18, 2024
Updated
Apr 13, 2024
Overview
Dataset description and usage context
WTS Dataset Overview
Dataset Introduction
- Name: WTS: A Pedestrian‑Centric Traffic Video Dataset for Fine‑grained Spatial‑Temporal Understanding
- Source: Woven by Toyota, Inc.
- Purpose: Emphasize detailed vehicle and pedestrian behaviors in various traffic events, including accidents.
- Content: Over 1.2 k video events spanning more than 130 distinct traffic scenes, combining vehicle‑ego and fixed‑high‑angle camera views.
- Additional Resources: Approximately 4.8 k pedestrian‑focused traffic videos from BDD100K with detailed textual annotations.
Dataset Structure
- Video Data: Stored in the videos folder, containing both real-world WTS videos and the pedestrian-centric subset (BDD_PC_5K) filtered from BDD100K.
- Annotations: Provide BBox annotations for target pedestrians and vehicles, plus detailed scene descriptions covering position, attention, action, and context.
- Future Updates: Will add 3D gaze and position annotations.
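The per-scenario annotations pair timed phases with textual descriptions. A minimal sketch of reading such a record follows; the field names (`labels`, `start_time`, `caption_pedestrian`, `caption_vehicle`) are assumptions for illustration, so check the actual annotation files for the exact schema.

```python
import json

# Hypothetical WTS caption annotation record; real field names may differ.
sample_annotation = json.loads("""
{
  "scenario": "scenario_0001",
  "labels": [
    {
      "start_time": 39.0,
      "end_time": 45.0,
      "caption_pedestrian": "The pedestrian stands on the sidewalk facing the road.",
      "caption_vehicle": "The vehicle approaches the crossing at low speed."
    }
  ]
}
""")

# Walk the labeled phases and report each phase's duration and description.
for label in sample_annotation["labels"]:
    duration = label["end_time"] - label["start_time"]
    print(f"{duration:.1f}s phase: {label['caption_pedestrian']}")
```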
Data Preparation
- BBox Annotations: Frame-level annotations are supplied; a script frame_extraction.py is provided to extract frames matching annotation IDs.
- Shared Annotations: Videos within the same scenario folder share the same top-level annotations.
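Conceptually, the extraction step maps each annotated frame ID to an image file to be dumped from the video. The stdlib-only sketch below illustrates that mapping under an assumed schema (an `annotations` list whose entries carry an `image_id`); it is not the actual frame_extraction.py, and the real frame dumping would be done with a tool such as ffmpeg or OpenCV.

```python
from pathlib import Path

def frames_to_extract(annotation: dict, out_dir: str) -> dict:
    """Return {frame_id: target image path} for every bbox entry.

    Assumes each entry in annotation["annotations"] has an "image_id"
    field referencing the video frame index (an illustrative schema).
    """
    out = Path(out_dir)
    mapping = {}
    for ann in annotation.get("annotations", []):
        frame_id = ann["image_id"]
        mapping[frame_id] = out / f"{frame_id:06d}.png"
    return mapping

# Toy annotation file with two annotated frames.
sample = {"annotations": [{"image_id": 120, "bbox": [10, 20, 50, 80]},
                          {"image_id": 184, "bbox": [12, 22, 48, 76]}]}
paths = frames_to_extract(sample, "frames")
print(sorted(paths))  # -> [120, 184]
```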
Evaluation
- Validation Set: A validation split for video-to-text tasks is provided, mirroring the training folder structure under val.
- Submission Format: For AI City Challenge 2024 Track 2, submissions should be a JSON file containing predictions for all test videos.
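As a rough sketch, a submission can be assembled by collecting per-video predictions into one dictionary and serializing it with the standard json module. The exact schema is defined by the challenge; the structure below (video ID mapped to per-phase captions) is an assumption for illustration only.

```python
import json

# Hypothetical predictions keyed by video ID; the real submission schema
# is specified by AI City Challenge 2024 Track 2 and may differ.
predictions = {
    "video_0001": [
        {"labels": ["0"],
         "caption_pedestrian": "The pedestrian crosses the road.",
         "caption_vehicle": "The vehicle slows down before the crossing."},
    ],
}

# Write one JSON file covering all test videos.
with open("submission.json", "w") as f:
    json.dump(predictions, f, indent=2)
```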
License and Citation
- License: Refer to the WTS homepage for licensing details.
- Citation: If the dataset benefits your research, please cite the associated publication.
Update List
- Completed: View list, first‑frame BBox annotations, generated BBoxes, evaluation code.
- Pending: 3D gaze annotations, dataset arXiv paper.