
MIPcc26: The 2026 Land-Doig MIP Competition

About

The computational development of optimization tools is a key component of work in the MIP community and has proven to be a challenging task. It requires deep knowledge of well-established methods, technical implementation skills, and creativity to push boundaries with novel solutions. In 2022, the annual Mixed Integer Programming Workshop established a computational competition to encourage and recognize the development of novel practical techniques within MIP technology. It was renamed in 2025, with permission, to honor Ailsa H. Land and Alison G. Harcourt (née Doig), who proposed the first LP-based branch-and-bound algorithm, a fundamental component of every modern MIP solver.

This year, the computational competition will focus on primal heuristics for general MI(L)P that can be enhanced by GPU acceleration.

The 2026 Land-Doig MIP competition is supported by NVIDIA, which is providing NVIDIA GPU cloud credits for the development of submissions and as prizes.


The Challenge: GPU-Accelerated Primal Heuristics for MIP

The optimization community is seeing increasing momentum in the development of effective GPU-based optimization methods and in the ecosystem of libraries that facilitate research and implementation of GPU-based algorithms. For example, in linear programming, practical first-order LP methods such as PDLP/PDHG can run efficiently on a GPU, enabling speed-ups over traditional LP methods in a number of large-scale use cases. These algorithms are leveraged in emerging GPU-accelerated MIP solvers, such as cuOpt. In addition, we are increasingly seeing the use of GPU-accelerated differentiable optimization as a subroutine in discrete optimization heuristics, made accessible by machine learning libraries such as PyTorch and JAX. These solvers and libraries are often open source, making it easy for researchers to build on top of them.
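To give a flavor of such methods, below is a minimal, unoptimized PDHG sketch for an LP with equality constraints and box bounds, written with PyTorch. This is our own illustrative sketch, not part of the competition material; production solvers such as PDLP add restarts, adaptive step sizes, and preconditioning on top of this basic iteration.

    import torch

    def pdhg_lp(c, A, b, lb, ub, iters=5000, device=None):
        # Vanilla PDHG for:  min c^T x  s.t.  A x = b,  lb <= x <= ub.
        # Illustrative sketch only; dense A for simplicity.
        device = device or ("cuda" if torch.cuda.is_available() else "cpu")
        c, A, b = c.to(device), A.to(device), b.to(device)
        lb, ub = lb.to(device), ub.to(device)
        # Step sizes must satisfy tau * sigma * ||A||_2^2 < 1 for convergence.
        op_norm = torch.linalg.matrix_norm(A, 2)
        tau = sigma = 0.9 / op_norm
        x = torch.zeros(A.shape[1], device=device)
        y = torch.zeros(A.shape[0], device=device)
        for _ in range(iters):
            x_new = torch.clamp(x - tau * (c - A.T @ y), lb, ub)  # primal step + box projection
            y = y + sigma * (b - A @ (2.0 * x_new - x))           # dual step on extrapolated point
            x = x_new
        return x, y

Note that every operation in the loop is a matrix-vector product or an elementwise update, which is exactly the kind of workload GPUs excel at; this is what makes first-order methods attractive as building blocks for GPU-based MIP heuristics.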

This competition is designed to boost research towards high-quality GPU-accelerated MIP solvers that push the boundaries on large-scale problems that current CPU-based MIP solvers might struggle with. Modern MIP solvers contain a portfolio of techniques such as branch-and-bound, cutting planes, presolve, and heuristics, and we hope to see fresh ideas and techniques that are inherently designed to leverage GPU acceleration. To provide a focal point, this competition centers on primal heuristics for general mixed-integer linear programming, as they are typically simpler to develop than exact methods and particularly useful for practitioners. In other words, we focus on algorithms that quickly find high-quality feasible solutions for mixed-integer linear programs using GPUs.

We especially welcome submissions from those who have never used a GPU before, particularly students, as well as those proficient with GPUs who have not done much optimization before. Our goal is to foster the development not only of algorithms, but also of expertise in the community. To encourage this:

  1. We will distribute NVIDIA GPU cloud credits to registered participants specifically to be used for this competition, provided by NVIDIA (see “Compute Infrastructure” section).
  2. We provide learning materials to ease the learning curve for participants to implement GPU-based heuristics for MIP (see “Learning Materials” section).

The task is to provide:

To encourage the development of fundamentally novel techniques, this competition will have a special rule that MIP solvers (or variants) must not be used as a subroutine. Please see complete details in the Technical Rules section.

The jury will select one winner and up to two honorable mentions to present their work at the MIP Workshop 2026. The winning team will present in a regular session, whereas honorable mentions will present posters.

This year, NVIDIA will be providing a total prize pool of $1500 in GPU cloud credits:

Furthermore, we will provide the following:

Timeline

Organizing Committee

Learning Material

Tutorial

The committee is currently working on a customized tutorial, which will walk through a simple example of a GPU-accelerated heuristic for MIP. We plan to release it by November. Please check back!

External sources

Competition Rules

Rules for Participation

Technical Rules

In case participants have any questions about the implementation of specific rules, they should not hesitate to contact the organizers.

Input/Output

The code must read the problem in gzipped MPS format, and write two types of files:

  1. a set of solution files, each logging the best solution found so far, using the MIPLIB solution format (described below). Each file should contain a single solution and be named “solution_i.sol”, where “i” is the order in which the solutions were found, starting at 1 (solution_1.sol, solution_2.sol, etc.).
  2. a single timing file, containing a list of times when each solution was found, named “timing.log”.

Each solution file must be written at roughly the same time as the solution is reported in the timing file. The files must be formatted as follows (see also the code sketch after this list):

  1. Each solution file follows the MIPLIB format, which is the following:
    • The first line must contain the string “=obj=”, followed by the objective value of the solution. These should be separated by whitespace (any amount of spaces or tabs).
    • The following lines must contain one variable per line with the variable name, followed by the solution value of the variable. These should be separated by whitespace (any amount of spaces or tabs).
    • Important: Please ensure that the objective and solution values are output with full floating-point precision and not truncated. Otherwise, solutions might end up outside the tolerances and not be counted. If the objective value is tracked in some form that could lead to numerical imprecision (e.g. iteratively), we recommend recomputing the objective value from scratch when writing the solution file.

      Example solution file: Please refer to any solution file on the MIPLIB website (example).

  2. The timing file contains the elapsed wall time at which each solution was found.
    • Each line must contain the filename of the solution (e.g. “solution_1.sol”), followed by the wall time when that solution was found. These should be separated by whitespace (any amount of spaces or tabs).
    • Times are given in seconds with 3 decimal places, as a number only (no “s” suffix).
    • Submissions may exclude the time it takes to read the MPS file from the time limit. To do so, track the time spent reading the MPS file and subtract it when writing the elapsed wall times. For our own checking, please also report the loading time by including, as the first line of the timing file, the string “input” followed by the time it took to read the MPS file, in the same format as above. If that line is not present, we will assume that you have opted not to subtract any time.
    • Solutions with reported times that are higher than the time limit plus a tolerance of 1s (i.e. 301s) will be ignored. In other words, all reported solution times in the timing file must be at most this limit. We recommend that submissions include that check in their code.

      Example timing file:

      input   0.129
      solution_1.sol   10.248
      solution_2.sol   69.831
      solution_3.sol   173.591
      solution_4.sol   300.020
      
    • Note that the printed solution time values must already have had the input time subtracted, e.g. in the example above, solution_1 was actually found at 10.248 + 0.129 = 10.377s.

We recommend that submissions use a separate thread to write solution files. The reported time should be taken right before writing the solution, and it is acceptable for the execution to run longer to finish writing, as long as the last solution is found before the time limit.
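To make the required output concrete, here is one possible way to structure the writing logic in Python. The class and its structure are our own illustration; only the file names and formats are mandated by the rules above.

    import time

    class SolutionWriter:
        # Writes solution_i.sol files and timing.log in the formats above.
        # Construct this right after reading the input, so that elapsed
        # times already exclude the reading time.
        def __init__(self, out_dir, input_time):
            self.out_dir, self.count = out_dir, 0
            self.start = time.monotonic()
            with open(f"{out_dir}/timing.log", "w") as f:
                f.write(f"input   {input_time:.3f}\n")  # loading time we opted to subtract

        def report(self, obj, names, values, time_limit=300.0):
            elapsed = time.monotonic() - self.start
            if elapsed > time_limit:  # never report past the time limit
                return
            self.count += 1
            name = f"solution_{self.count}.sol"
            with open(f"{self.out_dir}/{name}", "w") as f:
                # 17 significant digits preserve full double precision
                f.write(f"=obj= {obj:.17g}\n")
                for var, val in zip(names, values):
                    f.write(f"{var} {val:.17g}\n")
            with open(f"{self.out_dir}/timing.log", "a") as f:
                f.write(f"{name}   {elapsed:.3f}\n")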

Solution Checker

During evaluation, we plan to use MIPLIB’s official solution checker, which can be found inside the scripts package miplib2017-testscript-v1.0.4.zip on the MIPLIB Downloads page. For convenience, we repackage only the checker here. Importantly, the checker must be run with the tolerances above, as its default tolerances are looser: ./solchecker <model.mps.gz> <solution.sol> 1e-6 1e-5. We strongly recommend that participants run the checker themselves over all solutions produced by their submission. Solutions that fail the checker will not be considered valid.
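For example, a Python-based submission could validate its own output with a loop along these lines (the model and output paths below are placeholders for your own files):

    import glob, subprocess

    # Run the repackaged MIPLIB checker with the competition tolerances
    # over every solution file the heuristic wrote.
    for sol in sorted(glob.glob("out/solution_*.sol")):
        subprocess.run(["./solchecker", "model.mps.gz", sol, "1e-6", "1e-5"],
                       check=True)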

Problem Instances

The test set of problem instances can be found here. All files are in gzipped MPS format.

To encourage generality, the 50 instances in the test set were selected to be diverse, covering 18 different problem classes from typical MIP applications. In addition, the instance set contains several difficult or large-scale instances to align with the goal of pushing the boundaries of what can be solved, especially given that most of the recent potential of GPU algorithms for LP/MIP comes from large-scale applications.

This year, our set of problem instances has already been presolved by Gurobi. The intent of providing a presolved set is to take presolve out of the way so that submissions can focus on the heuristic itself. However, we recognize that certain approaches may work best if they can detect special problem structure, which may be lost during presolve. Therefore, we provide the original set here, and offer participants the option to run on the original set if they wish (this will be asked at submission).

For evaluation, we will run the code on a hidden set of problem instances as well. The hidden set will be slightly harder than the test set. Approximately half of the hidden set will be instances from problem classes present in the test set, and the other half will be unseen problem classes.

For reference, we provide objective values after running Gurobi with a time limit of 5 minutes. These are informative only.

Evaluation Criteria

The spirit of this competition is to encourage the research and implementation of novel GPU-based algorithms for MIP, and as such, the jury will evaluate submissions by two criteria:

  1. By performance, based on a combination of primal integral and final objective value of the heuristic.
  2. By innovation, which will be an evaluation by jury of the method and implementation.

Given subtleties with GPU algorithms, the jury reserves the right to adjust the evaluation criteria if unforeseen scenarios arise, but it will follow the spirit of the above criteria.

Performance criterion

To compute the performance of the heuristic, we will run the code on both the public and hidden instances, and rank submissions by their primal integral and final objective value. We expect high-quality submissions to do well in both metrics, not just one or the other.
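The exact scoring formula is at the jury’s discretion, but the primal integral (in the spirit of Berthold’s definition) accumulates a normalized primal gap over time, so that finding good solutions early is rewarded. The sketch below illustrates the idea on the (time, objective) pairs from a timing.log; the reference value and normalization here are our own choices for illustration.

    def primal_integral(events, t_end, obj_ref):
        # events: list of (time, objective) pairs, e.g. parsed from timing.log
        # obj_ref: a reference objective, e.g. the best known solution value
        def gap(obj):
            if obj is None:
                return 1.0  # no incumbent yet: maximal gap
            denom = max(abs(obj), abs(obj_ref), 1e-10)
            return abs(obj - obj_ref) / denom
        total, t_prev, incumbent = 0.0, 0.0, None
        for t, obj in sorted(events):
            total += gap(incumbent) * (t - t_prev)  # gap held since last event
            t_prev, incumbent = t, obj
        return total + gap(incumbent) * (t_end - t_prev)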

Innovation criterion

Given that this competition is designed to encourage innovation in effective GPU algorithms for MIP, the method and implementation itself will be as important as its performance. The jury will review the report for both methodological and engineering innovations. For example, a submission with strong methodological innovation might propose a novel algorithm designed for GPUs from the ground up that differs from what is typically seen in the MIP literature. A submission with strong engineering innovation might have clever implementations of GPU kernels or data structures that best utilize the GPU. That said, we advise participants not to be constrained by the two categories above: either a strong implementation of an existing method or a straightforward implementation of a novel method would be an excellent candidate.

Please be aware that writing quality affects this criterion as well. We do not expect the report to be at the level of a published paper, but treat it as if you were writing one. Importantly, please be clear about what is novel in your method and add relevant citations for the parts of your method that are based on existing work.

Additionally, certain heuristic frameworks might naturally produce dual bounds, which are useful for an exact MIP solver. At the jury’s discretion, we may award a bonus if a method is able to produce good dual bounds as a byproduct. Participants are responsible for providing some computational evidence that these bounds can be reasonably better than LP bounds, but it does not need to be comprehensive. We will not rank these bounds computationally; they will only be considered for this criterion.

Compute Infrastructure and GPU Credits

The competition will provide compute infrastructure to participants via NVIDIA Brev, sponsored by NVIDIA. NVIDIA Brev provides access to various cloud services in a streamlined way that reduces the setup and management overhead of GPU infrastructure.

Each team will be provided with cloud credits that they can use for GPU development, debugging, and benchmarking. The credits will be allocated to each team’s Brev org, and team members consume the same credit pool. Credits will be provided on a rolling basis after teams register, typically within a few weeks of registration. Credits are not guaranteed, given that our pool is finite: depending on the number of registrants, we may prioritize teams mainly composed of students, which can be indicated in the registration form.

Recommendations on credits and the development environment

Since credits are limited for each team, we recommend using compute frugally, e.g., running compute instances only during active development and debugging. Two types of GPUs will be available in this development environment: L40 and H100. We recommend using the L40 for development, as it consumes fewer credits, and the H100 for benchmarking, as the jury evaluation will be done on an H100 SXM5.

Some cloud providers offer persistent storage and environments; some do not. Providers with persistent storage have a higher per-hour cost, plus a fixed per-hour storage cost even while the instance is offline. We recommend using persistent instances for ease of use, but managing cloud credits is the responsibility of the participants. Please be careful to stop instances when they are not in use; otherwise, credits will continue to be consumed.

Brev is flexible: one can use setup scripts or start from custom containers. We recommend the launchable workflow with our suggested development launchable, which contains python3, the CUDA drivers and toolkit, sanitizer and debugging tools, Jupyter, and basic development tools. Brev provides an intuitive and easy way to access the development environment via a single command-line command, and developers can also directly launch a VS Code instance that is automatically connected to the remote GPU instance. The documentation can be found here.

Submission Requirements

Registration

All participants must register with the full list of team members by filling out this registration form by December 12, 2025. Submissions are composed of a report and code.

Report

All participants must submit a written report of 10 pages maximum plus references, in Springer LNCS format. The report and the code must be submitted together by March 20, 2026 (AoE).

The report must include the following information:

Code

Participants are responsible for setting up the Brev environment that we provide so that it can run the heuristic. In addition, participants must provide a shell script named run_heuristic.sh that runs the code and takes two arguments (a skeleton is sketched after this list):

  1. The first argument is the path in the filesystem to the instance to read, in gzipped MPS format.
  2. The second argument is the path in the filesystem where the method should write the results (the solution files and the timing file).
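For instance, run_heuristic.sh could be a one-liner such as python3 heuristic.py "$1" "$2". A hypothetical heuristic.py entry point might look as follows; only the two-argument interface is mandated, and everything else (names, structure) is our own illustration.

    import gzip, sys, time

    def main():
        # Arguments mandated by the rules: the instance path (gzipped MPS)
        # and the output directory for the solution files and timing.log.
        instance_path, out_dir = sys.argv[1], sys.argv[2]
        t0 = time.monotonic()
        with gzip.open(instance_path, "rt") as f:
            mps_text = f.read()  # parse with the MPS reader of your choice
        input_time = time.monotonic() - t0  # reportable as the "input" line
        # ... build the model from mps_text, run the GPU heuristic, and
        # write results, e.g. via the SolutionWriter sketched earlier.

    if __name__ == "__main__":
        main()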