Dataset asset · Open Source Community · Video Question Answering · First-Person Perspective

egoqa

The Ego‑QA‑19k dataset contains 19k video‑question‑answer pairs in first‑person view scenarios. Dataset creation involved two stages: first, video subtitles were concatenated chronologically to generate video descriptions, from which GPT‑4 generated 20 questions per video; second, questions containing specific cue words were filtered out, and graduate‑level native English speakers verified that each question was authentic and determined the video length required to answer it.

Source
huggingface
Created
Oct 5, 2024
Updated
Oct 5, 2024
Signals
287 views
Availability
Linked source ready
Overview

Dataset description and usage context

Ego‑QA‑19k

Dataset Description

  • Domain: Ego‑centric scenes
  • Data Generation Process:
    1. Question‑Answer Generation: For each video, subtitles were concatenated chronologically to form a video description, then GPT‑4 generated 20 questions per video.
    2. Data Filtering: Questions containing cue words (e.g., "passage", "text", "description") were filtered out, since such wording indicates the question targets the generated description rather than the video itself. Native‑English graduate students then verified that each remaining question was authentic and determined the video length needed to answer it.
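The cue‑word filtering step above can be sketched in a few lines. This is a hypothetical illustration, not the authors' actual code; the cue‑word list and the simple substring matching are assumptions based on the examples given in the description.

```python
# Hypothetical sketch of the Ego-QA-19k cue-word filtering step.
# Questions mentioning these words likely refer to the generated text
# description rather than the video content, so they are discarded.
CUE_WORDS = {"passage", "text", "description"}

def is_video_grounded(question: str) -> bool:
    """Return True if the question contains none of the cue words,
    i.e. it plausibly requires watching the video to answer."""
    lowered = question.lower()
    return not any(cue in lowered for cue in CUE_WORDS)

questions = [
    "What does the person pick up after opening the fridge?",
    "According to the passage, what happens next?",
]
filtered = [q for q in questions if is_video_grounded(q)]
# filtered keeps only the first, video-grounded question
```

In the actual pipeline, this automatic pass was followed by manual review from graduate‑level native English speakers, so the filter only needs high recall on obvious text‑referencing questions.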

Usage

Need downstream help?

Pair the dataset with AI analysis and content workflows.

Once the source passes your review, you can move straight into summarization, transformation, report drafting, or presentation generation with the JuheAI toolchain.
