Explore high-quality datasets for your AI and machine learning projects.
We provide the NEFER dataset for neuromorphic event-driven facial expression recognition. It consists of paired RGB and event-camera videos of human faces, annotated with the corresponding emotion, facial bounding boxes, and facial landmarks. Volunteers are recorded while watching stimulus videos, and each recording is labeled with one of the seven universal emotions defined by Paul Ekman. Annotation followed a one-annotator, two-reviewer, one-clinical-expert verification workflow. The data are intended for training AI models for automatic facial expression recognition.
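For illustration, here is a minimal Python sketch of how one paired RGB/event sample could be represented. The field names, file paths, and label list are assumptions based on the description above, not the dataset's official schema.

```python
# A minimal sketch of one NEFER-style sample record; all names are hypothetical.
from dataclasses import dataclass
from typing import List, Tuple

# Assumed label set: one common seven-class reading of Ekman's universal emotions
# (some FER datasets substitute "neutral" for "contempt").
EMOTIONS = ["anger", "contempt", "disgust", "fear",
            "happiness", "sadness", "surprise"]

@dataclass
class PairedSample:
    rgb_video_path: str                       # path to the RGB recording
    event_video_path: str                     # path to the paired event-camera recording
    emotion: str                              # one label from EMOTIONS
    face_bbox: Tuple[int, int, int, int]      # (x, y, width, height) in pixels
    landmarks: List[Tuple[float, float]]      # facial landmark (x, y) coordinates

sample = PairedSample(
    rgb_video_path="rgb/subject01_clip03.mp4",
    event_video_path="event/subject01_clip03.aedat",
    emotion="happiness",
    face_bbox=(120, 80, 200, 200),
    landmarks=[(130.5, 95.0), (145.2, 96.1)],  # truncated for brevity
)
assert sample.emotion in EMOTIONS
```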
The DAiSEE dataset is used for facial expression recognition; 300 frames are extracted from each video, and each frame is annotated with 68 facial landmark points and 52 action units.
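As a rough sketch, the per-video annotations can be validated against the stated counts as follows; the array names and file layout are assumptions, with random values standing in for real annotations.

```python
# Shape check for per-video annotations, assuming NumPy arrays (hypothetical layout).
import numpy as np

N_FRAMES, N_LANDMARKS, N_AUS = 300, 68, 52

landmarks = np.random.rand(N_FRAMES, N_LANDMARKS, 2)  # (x, y) per landmark per frame
action_units = np.random.rand(N_FRAMES, N_AUS)        # one value per action unit per frame

assert landmarks.shape == (N_FRAMES, N_LANDMARKS, 2)
assert action_units.shape == (N_FRAMES, N_AUS)
```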
The dataset contains facial expression recordings of 500 drivers across seven expression categories, covering a range of ages and recording times. Data were captured with visible-light and infrared binocular cameras at a resolution of 1920×1080. The dataset can be used for driver facial expression recognition and related tasks.