MASSW
MASSW is a comprehensive text dataset summarizing multiple aspects of scientific workflows. It contains over 152,000 peer‑reviewed publications from 17 leading computer‑science conferences spanning the past 50 years. The dataset defines five core aspects of a scientific workflow (context, key idea, method, outcome, and projected impact) and systematically extracts and structures these aspects from each publication using large language models (LLMs). The extractions are large‑scale and high‑accuracy, validated through extensive checks against human annotations and alternative extraction methods. MASSW supports novel, benchmarkable machine‑learning tasks such as idea generation and outcome prediction, providing a benchmark for evaluating LLM agents in scientific research.
MASSW Dataset Overview
Dataset Information
Configuration massw_data
- Features:
  - id: string
  - context: string
  - key_idea: string
  - method: string
  - outcome: string
  - projected_impact: string
- Splits:
  - train: 154,275 examples (153,085,133 bytes)
- Download Size: 86,202,576 bytes
- Dataset Size: 153,085,133 bytes
Configuration massw_metadata
- Features:
  - id: string
  - venue: string
  - title: string
  - year: integer
  - partition: string
  - abstract: string
- Splits:
  - train: 191,055 examples (178,427,074 bytes)
- Download Size: 97,735,018 bytes
- Dataset Size: 178,427,074 bytes
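The two configurations share an `id` field, so a workflow record (context, key idea, etc.) can be joined with its paper metadata (venue, title, year). A minimal Python sketch of that join; the record values below are toy placeholders, not real dataset rows:

```python
# Sketch: joining the massw_data and massw_metadata configurations on `id`.
# Field names follow the schemas above; record contents are invented examples.

def join_configs(data_rows, metadata_rows):
    """Attach venue/title/year metadata to each workflow record by `id`."""
    meta_by_id = {row["id"]: row for row in metadata_rows}
    joined = []
    for row in data_rows:
        meta = meta_by_id.get(row["id"], {})
        # Workflow fields take precedence if a key appears in both rows.
        joined.append({**meta, **row})
    return joined

data = [{"id": "p1", "context": "...", "key_idea": "...",
         "method": "...", "outcome": "...", "projected_impact": "..."}]
metadata = [{"id": "p1", "venue": "NeurIPS", "title": "Example Paper",
             "year": 2024, "partition": "train", "abstract": "..."}]

merged = join_configs(data, metadata)
```

Each merged row then carries all eleven fields from the two schemas, keyed by the shared paper `id`.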
Dataset Characteristics
- Structured Scientific Workflow: Includes five core aspects – context (background), key idea (main intellectual contribution), method (research approach), outcome (objective results), projected impact (anticipated influence).
- Large‑Scale: Over 152,000 peer‑reviewed papers from 17 top computer‑science conferences spanning more than 50 years.
- High Accuracy: Validated through comprehensive checks and comparison with human annotations and alternative extraction methods.
- Rich Benchmark Tasks: Supports multiple machine‑learning tasks such as idea generation and outcome prediction.
Core Aspect Definitions
| Aspect | Definition | Example |
|---|---|---|
| Context | The surrounding literature or real‑world situation, usually a problem, research question, or unresolved gap. | Making language models bigger does not inherently make them better at following a user's intent, as large models can generate outputs that are untruthful, toxic, or not helpful. |
| Key Idea | The primary intellectual contribution of the paper, often a novel concept or solution compared with the context. | The authors propose InstructGPT, a method to align language models with user intent by fine‑tuning GPT‑3 using a combination of supervised learning with labeler demonstrations and reinforcement learning from human feedback. |
| Method | The concrete research approach used to validate the key idea, which may include experimental setup, theoretical framework, or other validation techniques. | The authors evaluate the performance of InstructGPT by humans on a given prompt distribution and compare it with a much larger model GPT‑3. |
| Outcome | An objective statement of the research output, such as experimental results or other measurable findings. | InstructGPT, even with 100× fewer parameters, is preferred over GPT‑3 in human evaluations. It shows improvements in truthfulness and reductions in toxic outputs with minimal performance regressions on public NLP datasets. |
| Projected Impact | The authors' anticipated influence of the research on the field, and any identified avenues for further improvement or extension. | Fine‑tuning with human feedback is a promising direction for aligning language models with human intent. |
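Assembled into a single record with the massw_data field names, the InstructGPT running example from the table looks like the following sketch (the `id` value is hypothetical, and the texts are abridged from the table):

```python
# One MASSW-style workflow record built from the table's running example.
record = {
    "id": "example-instructgpt",  # hypothetical identifier
    "context": ("Making language models bigger does not inherently make "
                "them better at following a user's intent."),
    "key_idea": ("The authors propose InstructGPT, aligning language models "
                 "with user intent by fine-tuning GPT-3 with supervised "
                 "learning and reinforcement learning from human feedback."),
    "method": ("Human evaluation of InstructGPT on a prompt distribution, "
               "compared with the much larger GPT-3."),
    "outcome": ("InstructGPT, with 100x fewer parameters, is preferred over "
                "GPT-3 in human evaluations."),
    "projected_impact": ("Fine-tuning with human feedback is a promising "
                         "direction for aligning language models."),
}

# Every record in the massw_data configuration carries these five aspects.
aspects = {"context", "key_idea", "method", "outcome", "projected_impact"}
assert aspects <= set(record)
```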
Coverage
- Covers 17 leading computer‑science conferences:
- Artificial Intelligence: AAAI, IJCAI
- Computer Vision: CVPR, ECCV, ICCV
- Machine Learning: ICLR, ICML, NeurIPS, KDD
- Natural Language Processing: ACL, EMNLP, NAACL
- Information Retrieval: SIGIR, WWW
- Databases: SIGMOD, VLDB
- Interdisciplinary: CHI
Citation
@article{zhang2024massw,
title={MASSW: A New Dataset and Benchmark Tasks for AI‑Assisted Scientific Workflows},
author={Zhang, Xingjian and Xie, Yutong and Huang, Jin and Ma, Jinge and Pan, Zhaoying and Liu, Qijia and Xiong, Ziyang and Ergen, Tolga and Shim, Dongsub and Lee, Honglak and others},
journal={arXiv preprint arXiv:2406.06357},
year={2024}
}
Source
Organization: huggingface
Created: 10/22/2024