NEFER
We provide the NEFER dataset for neuromorphic event-driven facial expression recognition. It consists of paired RGB and event-camera videos of human faces, annotated with the corresponding emotions as well as facial bounding boxes and landmarks. Volunteers are recorded while viewing stimulus videos, each of which is labeled with one of the seven universal emotions defined by Paul Ekman.
Dataset Overview
Name: NEFER
Purpose: Neuromorphic event‑driven facial expression recognition
Content: Paired RGB and event videos of human faces, annotated with corresponding emotions and facial bounding boxes and landmarks.
Emotion Labels:
- Disgust
- Contempt
- Happiness
- Fear
- Anger
- Surprise
- Sadness
Additional Label: "None" for instances where volunteers perceived no emotion.
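For training, these labels are commonly mapped to integer class indices. Below is a minimal sketch; the ordering and variable names are illustrative assumptions, not part of the dataset specification.

```python
# Hypothetical label-to-index mapping for NEFER's seven Ekman emotions
# plus the extra "None" class; the ordering is an assumption, not
# prescribed by the dataset.
EMOTIONS = [
    "Anger", "Contempt", "Disgust", "Fear",
    "Happiness", "Sadness", "Surprise", "None",
]
LABEL_TO_INDEX = {name: i for i, name in enumerate(EMOTIONS)}
INDEX_TO_LABEL = dict(enumerate(EMOTIONS))
```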
Dataset Structure:
- event_raw: Raw event-camera video folders
- event_frames: Event frames obtained using time-binary encoding[2] (sketched below)
- rgb_frames: RGB video frame folders
- annotations: Multiple CSV files for training and validation, corresponding to RGB and event data (expected emotions). Each file also has a "subjective" version (reported emotions) provided by users.
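The time-binary encoding referenced above can be understood as grouping events into fixed-duration temporal bins, where each bin yields a binary frame in which a pixel is set if at least one event fired there. A minimal sketch under these assumptions (the field names t/x/y, microsecond timestamps, and the 33 ms bin width are hypothetical, not taken from the release):

```python
import numpy as np

def time_binary_frames(events, height, width, bin_us=33_000):
    """Encode raw events into a stack of binary frames.

    `events` is assumed to be a NumPy structured array with fields
    t (timestamp, microseconds), x, and y; the field names and the
    33 ms bin width are illustrative assumptions, not NEFER's spec.
    """
    t0 = events["t"].min()
    bin_idx = (events["t"] - t0) // bin_us        # temporal bin of each event
    n_frames = int(bin_idx.max()) + 1
    frames = np.zeros((n_frames, height, width), dtype=np.uint8)
    # A pixel is 1 if at least one event fell into that bin at (y, x).
    frames[bin_idx, events["y"], events["x"]] = 1
    return frames
```

If the raw events also carry polarity, one common variant splits positive and negative events into two channels per bin rather than merging them.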
Training Set Users: [01, 02, 04, 05, 06, 08, 09, 10, 11, 12, 13, 14, 15, 16, 21, 22, 23, 24, 25, 26]
Validation Set Users: [03, 07, 17, 19, 27, 28]
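To reproduce this split programmatically, sequences can be filtered by the user ID in their folder names. A minimal sketch, assuming a hypothetical naming convention like user_01_clip_003 (the actual layout of the release may differ):

```python
from pathlib import Path

TRAIN_USERS = {"01", "02", "04", "05", "06", "08", "09", "10",
               "11", "12", "13", "14", "15", "16", "21", "22",
               "23", "24", "25", "26"}
VAL_USERS = {"03", "07", "17", "19", "27", "28"}

def split_sequences(root):
    """Partition sequence folders into train/validation by user ID.

    Assumes folder names like 'user_01_clip_003'; this convention is
    hypothetical and may not match the released directory layout.
    """
    train, val = [], []
    for seq in sorted(Path(root).iterdir()):
        parts = seq.name.split("_")
        user_id = parts[1] if len(parts) > 1 else ""
        # Any user ID outside both lists is skipped.
        if user_id in TRAIN_USERS:
            train.append(seq)
        elif user_id in VAL_USERS:
            val.append(seq)
    return train, val
```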
Additional Annotations: Facial landmarks and bounding boxes will be provided.
Download Link: Google Drive