Dataset asset · Open Source Community · Combinatorial Optimization · Predict-then-Optimize

PredictiveCO Benchmark

A benchmark dataset for predictive combinatorial optimization, used to evaluate predict-then-optimize (PtO) and predict-and-optimize (PnO) methods under uncertain coefficients, covering real-world scenarios such as energy-cost-aware scheduling and advertising budget allocation.

Source: github
Created: Oct 19, 2024
Updated: Oct 24, 2024
Availability: Linked source ready

Overview

PredictiveCO Benchmark is a benchmarking framework for predictive combinatorial optimization (Predictive‑CO), in which the coefficients of a combinatorial problem are uncertain and must be predicted from data before the problem is solved. The framework evaluates methods built on two design principles: “Predict‑then‑Optimize (PtO)” and “Predict‑and‑Optimize (PnO)”.
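
To make the distinction concrete, here is a minimal, self-contained sketch of the two paradigms on a toy selection problem with uncertain coefficients. It is not taken from the PredictiveCO codebase; the toy instance, the solve helper, and the regret metric are illustrative assumptions only.

    # Illustrative only -- not based on the PredictiveCO API.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy instance: features x, unknown true coefficients c (e.g. item values).
    n_items, n_feat = 5, 3
    W_true = rng.normal(size=(n_items, n_feat))
    x = rng.normal(size=n_feat)
    c_true = W_true @ x  # true coefficients, hidden at decision time

    def solve(c):
        """Toy combinatorial 'solver': pick the item with the largest coefficient."""
        return int(np.argmax(c))

    def regret(c_hat, c):
        """Decision regret: objective value lost by optimizing over predicted coefficients."""
        return c[solve(c)] - c[solve(c_hat)]

    # Predict-then-Optimize (PtO, two-stage): a predictor is trained purely to
    # minimize prediction error (e.g. MSE) and its point estimate is then passed
    # to the solver. A noisy estimate stands in for such a predictor here.
    c_pto = c_true + rng.normal(scale=1.0, size=n_items)
    print("PtO decision:", solve(c_pto), "regret:", regret(c_pto, c_true))

    # Predict-and-Optimize (PnO, decision-focused): the predictor is instead
    # trained end-to-end against the downstream decision quality (regret or a
    # differentiable surrogate of it), so prediction errors that do not change
    # the final decision are not penalized.

In this setting, methods are typically compared on decision quality (such as the regret above) rather than on raw prediction error alone.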

Datasets

  • Problem Types: Covers 8 problems, including a new industrial dataset for combinatorial advertising.
  • Methods: The benchmark includes 11 existing PtO/PnO methods.

Modular Framework

  • User Customization: Users can plug in their own problems, predictors, solvers, loss functions, and evaluation methods (see the illustrative sketch below).
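
As a rough illustration of the kind of components a user could supply, the sketch below uses entirely hypothetical names (MyProblem, MyPredictor, my_loss) and a PyTorch-style predictor; the actual extension interfaces are defined in the repository and will differ in detail.

    # Hypothetical names throughout; consult the repository for the real interfaces.
    import torch
    import torch.nn as nn

    class MyProblem:
        """A user-defined problem: maps cost coefficients to a combinatorial decision."""
        def __init__(self, n_select=2):
            self.n_select = n_select

        def solve(self, costs):
            # Toy solver: select the n_select items with the lowest predicted cost.
            return torch.topk(-costs, self.n_select).indices

    class MyPredictor(nn.Module):
        """A user-defined predictor mapping instance features to cost coefficients."""
        def __init__(self, n_feat, n_items):
            super().__init__()
            self.linear = nn.Linear(n_feat, n_items)

        def forward(self, feats):
            return self.linear(feats)

    def my_loss(pred_costs, true_costs):
        """A user-defined training loss (plain MSE here, i.e. a PtO-style objective)."""
        return nn.functional.mse_loss(pred_costs, true_costs)

    # Wiring the pieces together by hand; a benchmark framework would normally
    # automate this training/evaluation loop.
    problem, predictor = MyProblem(), MyPredictor(n_feat=4, n_items=6)
    feats, true_costs = torch.randn(1, 4), torch.randn(1, 6)
    pred_costs = predictor(feats)
    loss = my_loss(pred_costs, true_costs)                     # training signal
    decision = problem.solve(pred_costs.detach().squeeze(0))   # downstream decision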

Usage

  • Installation: Install the package locally (from the repository root) with:

    pip install -e .

  • Running Algorithms: Refer to the shell scripts in the “shells/benchmarks” directory.

Citation

    @inproceedings{geng2024predictive,
        title={Benchmarking PtO and PnO Methods in the Predictive Combinatorial Optimization Regime},
        author={Geng, Haoyu and Ruan, Hang and Wang, Runzhong and Li, Yang and Wang, Yang and Chen, Lei and Yan, Junchi},
        booktitle={NeurIPS 2024 Datasets and Benchmarks Track},
        year={2024}
    }

Terms of Use

By using this benchmark dataset, you agree to the terms of use specified in Appendix C.
