Dataset asset · Open Source Community · Video Question Answering · First‑Person Perspective
egoqa
The Ego‑QA‑19k dataset contains 19k video‑question‑answer pairs in first‑person view scenarios. It was created in two stages: first, video subtitles were concatenated chronologically into video descriptions, and GPT‑4 generated 20 questions per video; second, questions containing specific cue words were filtered out, and graduate‑level native English speakers verified each question's authenticity and the length of video needed to answer it.
Source: huggingface
Created: Oct 5, 2024
Updated: Oct 5, 2024
Ego‑QA‑19k
Overview
- Dataset Name: Ego‑QA‑19k
- Source: EMNLP 2024 paper Encoding and Controlling Global Semantics for Long‑form Video Question Answering
- Task Category: Question Answering
- Language: English
- Data Scale: 10K < n < 100K
- License: MIT
Dataset Description
- Domain: Ego‑centric scenes
- Data Generation Process:
- Question‑Answer Generation: For each video, subtitles were concatenated chronologically to form a video description, then GPT‑4 generated 20 questions per video.
- Data Filtering: Questions containing cue words (e.g., "passage", "text", "description") were filtered out, and native‑English graduate students verified each question's authenticity and the video length needed to answer it (see the filtering sketch after this list).
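As a rough illustration of the cue‑word filtering step, here is a minimal sketch. The card names only three cue words, so the full cue‑word list and the exact matching rule used by the authors are assumptions.

```python
import re

# Cue words named in the dataset card; any extension of this set is an assumption.
CUE_WORDS = {"passage", "text", "description"}

def keep_question(question: str) -> bool:
    """Return False if the question contains a cue word, i.e. it references
    the written description rather than the video itself."""
    tokens = set(re.findall(r"[a-z]+", question.lower()))
    return not (tokens & CUE_WORDS)

questions = [
    "What does the person pick up after opening the fridge?",
    "According to the description, what happens next?",
]
print([q for q in questions if keep_question(q)])
# -> ['What does the person pick up after opening the fridge?']
```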
Usage
- Data files are available under the repository's Files and versions tab.
- For details, refer to the paper Encoding and Controlling Global Semantics for Long‑form Video Question Answering and the accompanying GitHub code. A loading sketch follows below.
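If the files are published as a standard Hugging Face dataset repository, loading could look like the following sketch. The repository id, split name, and field names are placeholders, not confirmed by this card.

```python
from datasets import load_dataset

# Hypothetical repo id; substitute the real one from the Files and versions tab.
ds = load_dataset("user/Ego-QA-19k")

print(ds)              # inspect available splits and features
print(ds["train"][0])  # "train" and the field names depend on the uploaded files
```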