
Mixed Integer Programming Workshop 2025

June 2, 2025 (summer school)

June 3-6, 2025 (regular workshop)

University of Minnesota

About

The Mixed Integer Programming (MIP) Workshop is a single-track workshop highlighting the latest trends in integer programming and discrete optimization, with speakers chosen by invitation.

The 2025 edition of the workshop will be the twenty-second in the MIP series. Links to past editions can be found here. MIP 2025 is supported by the Mixed Integer Programming Society (MIPS), which is a section of the Mathematical Optimization Society (MOS).

The MIP 2025 organizing committee and the local organizing committee are committed to making the workshop a welcoming environment for all members of the MIP community. MIP 2025 will be an entirely in-person event. We look forward to seeing the community in Minneapolis, where MIP returns after twenty years (see MIP 2005 website and group photo).

Key Dates and Updates

Speakers

For talk abstracts, see the schedule or click the title for each speaker. Slides are also posted for speakers who chose to make them publicly available.

  • Tobias Achterberg (Gurobi), From Infeasibility to Feasibility - Improvement Heuristics to Find First Feasibles for MIPs [Slides]

    Abstract: Relaxation Induced Neighborhood Search (RINS) and other large neighborhood search (LNS) improvement heuristics for mixed integer programs (MIPs) explore some neighborhood around a given feasible solution to find other solutions with better objective value. This often leads to a chain of improving solutions with a high quality solution at its end, even if the starting solution is rather poor.

    RINS and its variants are the most important heuristic ingredients in Gurobi to find good solutions quickly. But they have one issue: they can only be employed after an initial feasible solution has been found.

    This initial feasible solution is usually found by other heuristics, so-called "start heuristics", like rounding of LP solutions, fix-and-dive, or the Feasibility Pump.

    In this talk, we discuss a different approach, which works surprisingly well: similar in spirit to the Feasibility Pump, consider infeasible integral vectors as input to the improvement heuristics and search in the neighborhood for vectors with small violation to act as new starting point for the next LNS improvement heuristic invocation.
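
    A minimal sketch of this idea on a made-up toy binary program (brute-force neighborhood search purely for illustration; a real implementation would solve an LNS sub-MIP with a solver):

```python
from itertools import product

# Toy binary program: find x in {0,1}^4 with
#   2*x0 + 3*x1 + 1*x2 + 2*x3 <= 4   and   x0 + x1 + x2 + x3 >= 2
A = [([2, 3, 1, 2], 4, "<="), ([1, 1, 1, 1], 2, ">=")]

def violation(x):
    """Total constraint violation of a (possibly infeasible) 0/1 vector."""
    total = 0.0
    for coeffs, rhs, sense in A:
        lhs = sum(c * v for c, v in zip(coeffs, x))
        total += max(0.0, lhs - rhs) if sense == "<=" else max(0.0, rhs - lhs)
    return total

def lns_repair(x, radius=2):
    """Search the Hamming ball of the given radius around x for the
    least-violated vector -- a stand-in for one LNS sub-MIP solve."""
    best, best_viol = x, violation(x)
    for flips in product([0, 1], repeat=len(x)):
        if sum(flips) > radius:
            continue
        cand = tuple(v ^ f for v, f in zip(x, flips))
        if violation(cand) < best_viol:
            best, best_viol = cand, violation(cand)
    return best, best_viol

# Start from an infeasible integral vector and "improve" its violation.
start = (1, 1, 1, 1)             # violates the <= constraint (lhs = 8 > 4)
repaired, viol = lns_repair(start)
print(repaired, viol)            # a feasible vector has violation 0.0
```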

  • Claudia D'Ambrosio (École Polytechnique), Approximating complex functions in mixed integer non-linear optimization: a hybrid data- and knowledge-driven approach [Slides]

    Abstract: The integration of data-driven and knowledge-driven approaches has recently caught the attention of the mathematical optimization community. For example, in Bertsimas and Ozturk (2023) and Bertsimas and Margaritis (2025), data-driven approaches are used to derive a more tractable approximation of some non-linear functions appearing in the constraints of mixed integer non-linear optimization (MINLO) problems. Their approach involves: i. sampling the non-linear functions; ii. using machine learning (ML) approaches to obtain a good, linear approximation of these functions; iii. integrating it into a mixed integer linear optimization (MILO) model to solve the MINLO problem faster.

    We take a similar viewpoint. However, instead of a ML-based function approximation, we apply a statistical learning (SL) approach and fit B-splines. The main advantage of this alternative is that the fitting can be formulated as a mathematical optimization problem. Consequently, prior knowledge about the non-linear function can be directly integrated as constraints (e.g., conditions on the sign, monotonicity, curvature). The resulting B-spline can be modeled as a piecewise polynomial function, which allows the integration in the MINLO model. We test our approach on challenging instances from MINLPlib and on a real-world application, namely the hydro unit commitment problem. The results show that, with the proposed method, we can find a good balance between quality and tractability of the approximation.
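
    Schematically, the shape-constrained fitting step described above can be posed as a constrained least-squares problem (notation mine, not the authors' exact model):

```latex
% Fit B-spline coefficients \beta to samples y of the non-linear function,
% with B the B-spline basis evaluated at the sample points:
\min_{\beta} \;\; \|y - B\beta\|_2^2
\qquad \text{s.t.} \qquad D\beta \ge 0,
% where the linear map D encodes the prior knowledge, e.g. first-order
% differences of \beta for monotonicity, second-order for convexity.
```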

  • Christina Büsing (RWTH Aachen University), Solving Robust Binary Optimization Problems with Budget Uncertainty [Slides]

    Abstract: Robust optimization with budgeted uncertainty, as proposed by Bertsimas and Sim in the early 2000s, is one of the most popular approaches for integrating uncertainty in optimization problems. The existence of a compact reformulation for MILPs and positive complexity results give the impression that these problems are relatively easy to solve. However, the practical performance of the reformulation is actually quite poor due to its weak linear relaxation.
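
    For reference, the compact reformulation mentioned above takes the standard Bertsimas-Sim form (stated here for binary x; notation mine):

```latex
% Robust constraint with uncertainty budget \Gamma: coefficient j may deviate
% by up to \hat{a}_j, for at most \Gamma coefficients simultaneously,
%   \sum_j a_j x_j + \max_{S:\,|S| \le \Gamma} \sum_{j \in S} \hat{a}_j x_j \le b.
% LP duality on the inner maximization yields the compact reformulation
\sum_j a_j x_j + \Gamma z + \sum_j p_j \le b, \qquad
z + p_j \ge \hat{a}_j x_j \quad \forall j, \qquad
z \ge 0, \quad p_j \ge 0 \quad \forall j,
% whose LP relaxation is weak in practice, as the abstract notes.
```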

    To overcome this weakness, we propose a bilinear formulation for robust binary programming, which is as strong as theoretically possible. From this bilinear formulation, we derive strong linear formulations as well as structural properties, which we use within a tailored branch and bound algorithm. Furthermore, we propose a procedure to derive new classes of valid inequalities for robust binary optimization problems. For this, we recycle valid inequalities of the underlying non-robust problem such that the additional variables from the robust formulation are incorporated. The valid inequalities to be recycled may either be readily available model-constraints or actual cutting planes, where we can benefit from decades of research on valid inequalities for classical optimization problems.

    We show in an extensive computational study that our algorithm and also the use of recycled inequalities outperforms existing approaches from the literature by far. Furthermore, we show that the fundamental structural properties proven in this paper can be used to substantially improve approaches from the literature. This highlights the relevance of our findings, not only for the tested algorithms, but also for future research on robust optimization.

  • Esra Buyuktahtakin Toy (Virginia Tech), Scenario Dominance Cuts for Risk-Averse Multi-Stage Stochastic Mixed-Integer Programs [Slides]

    Abstract: We introduce a novel methodology, termed "Stage-t Scenario Dominance," for addressing risk-averse multi-stage stochastic mixed-integer programs (M-SMIPs). We define the scenario dominance concept and utilize the partial ordering of scenarios to derive bounds and cutting planes, which are generated based on implications drawn from solving individual scenario sub-problems up to stage t. This approach facilitates the generation of new cutting planes, significantly enhancing our capacity to manage the computational challenges inherent in risk-averse M-SMIPs. We demonstrate the potential of this method on a stochastic formulation of the mean-Conditional Value-at-Risk (CVaR) dynamic knapsack problem. Our computational findings show that the "scenario dominance"-based cutting planes significantly reduce solution times for complex mean-risk, stochastic, multi-stage, and multi-dimensional knapsack problems, achieving reductions in computational effort by one to two orders of magnitude.
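
    As background, the mean-CVaR objective referenced here is typically handled via the standard Rockafellar-Uryasev representation (notation mine):

```latex
% CVaR at level \alpha of a loss random variable X:
\mathrm{CVaR}_\alpha(X) \;=\; \min_{\eta \in \mathbb{R}}
\Big\{ \eta + \tfrac{1}{1-\alpha}\, \mathbb{E}\big[(X - \eta)^+\big] \Big\},
% which becomes linear over finitely many scenarios s with probabilities p_s:
% minimize \eta + \tfrac{1}{1-\alpha} \sum_s p_s u_s
% subject to u_s \ge X_s - \eta and u_s \ge 0 for all s.
```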

  • Emma S. Johnson (Sandia National Laboratories), Negotiating with solvers: Using Generalized Disjunctive Programming to find Computationally Performant MIP Formulations [Slides]

    Abstract: We currently live in a world of powerful (but magical) MIP solvers that implement a seemingly endless supply of tricks and techniques (including presolvers, branching strategies, symmetry detection, cuts, heuristics, etc.). The impact of these capabilities is so significant that the practitioner is no longer best served by searching for the best theoretical formulation, and instead finds herself searching over the space of formulations, trying to find a form that minimizes solve time (sometimes without even branching!). From a practical perspective, solvers are black boxes that run on magic pixie dust. Worse still, the “best formulation” is a moving target: the performance of formulations (both in absolute and relative terms) can and does change dramatically from version to version of the solver.

    In this talk, I will present a systematic approach for generating and exploring alternative MIP formulations based on Generalized Disjunctive Programming. By combining an approach for representing the logic from a standard MIP with standardized (and automated) routines for reformulating the logical model into a MIP, we can efficiently and expediently explore the space of formulations and empirically identify formulations that are most effective for our problem. I will present preliminary computational results for several canonical problems using several versions of the Gurobi solver to argue that disjunctive programming serves us well by generalizing MIP and thus staying upwind of numerous formulation decisions that can have dramatic effects on solver performance.

  • Joris Kinable (Amazon), Learning Time-Dependent Travel Times

    Abstract: Travel time estimates are essential for a wide variety of transportation and logistics applications, including route planning, supply chain optimization, traffic management, and public transportation scheduling. Despite the large number of applications, accurately predicting travel times remains challenging due to the variability in (urban) traffic patterns and external factors such as weather, road works, and accidents. Travel times are generally stochastic and time-dependent, and estimates must be network-consistent, accounting for correlations in travel speeds across different parts of the road network.

    In this talk, we will review several existing machine learning (ML) and operations research (OR) models for predicting travel times. Additionally, we will introduce a novel ML-based method capable of accurately estimating dynamic travel times within intervals of just a few minutes, using only historical trip data that includes origin and destination locations and departure times, without requiring actual itineraries. We formulate the travel time estimation problem as an empirical risk minimization problem, utilizing a loss function that minimizes the expected difference between predicted and observed travel times. To solve this problem, we develop two supervised learning approaches that follow an iterative procedure, alternating between a path-guessing phase and a parameter-updating phase. The approaches differ in how parameter updates are performed, and therefore have different performance characteristics.

    We conduct extensive computational experiments to demonstrate the effectiveness of our proposed methods in reconstructing traffic patterns and generating precise travel time estimates. Experiments on real-world data show that our procedures scale well to large street networks and outperform current state-of-the-art methods.

  • Martine Labbé (Université libre de Bruxelles), Solving Chance-Constrained (mixed integer) Linear Optimization Problems with Branch-and-Cut [Slides]

    Abstract: Consider an optimization problem in which some constraints involve random coefficients with known probability distribution. A chance-constrained version of this problem amounts to imposing that these constraints must be satisfied with a probability larger than or equal to a given threshold.

    Chance-constrained optimization problems (CCOPs) are frequently used to model problems in the domain of energy. They are known to be NP-hard. In the case where the objective and the constraints are linear, the problem can be reformulated as a mixed-integer linear problem by introducing big-M constants.
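
    The big-M reformulation in question is the standard scenario-based one (notation mine):

```latex
% Chance constraint P( a(\xi)^\top x \le b(\xi) ) \ge 1 - \epsilon over
% finitely many scenarios s = 1, \dots, S with probabilities p_s:
a_s^\top x \le b_s + M_s z_s \quad \forall s, \qquad
\sum_{s=1}^{S} p_s z_s \le \epsilon, \qquad z_s \in \{0,1\} \quad \forall s,
% where z_s = 1 allows scenario s to be violated and M_s is chosen large
% enough to deactivate that scenario's constraint.
```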

    In this talk, we propose a Branch-and-Cut algorithm for solving linear CCOPs. We determine new valid inequalities and compare them to existing ones from the literature. Moreover, we state and prove results on the closure of these valid inequalities. Computational experiments validate the quality of these new inequalities.

    This is a joint work with Diego Cattaruzza, Matteo Petris, Marius Roland and Martin Schmidt.

  • Amélie Lambert (Conservatoire National des Arts et Métiers), Quadratization-based methods for solving unconstrained polynomial optimization problems [Slides]

    Abstract: We consider the problem of minimizing a polynomial with mixed-binary variables. We present a three-phase approach for solving it to global optimality. In the first phase, we design quadratization schemes for the polynomial by recursively decomposing each monomial into pairs of sub-monomials, down to the initial variables. Then, starting from given quadratization schemes, the second phase consists in constructing in an extended domain a quadratic problem equivalent to the initial one. The resulting quadratic problem is generally non-convex and remains difficult to solve. The last phase consists in computing a tight convex quadratic relaxation that can be used within a branch-and-bound algorithm to solve the problem to global optimality. Finally, we present first experimental results. Based on joint works with Sourour Elloumi, Arnaud Lazare, Daniel Porumbel.

  • Haihao Lu (Massachusetts Institute of Technology), GPU-Based Linear Programming and Beyond [Slides]

    Abstract: In this talk, I will discuss the recent trend of research on new first-order methods for scaling up and speeding up linear programming (LP) and convex quadratic programming (QP) with GPUs. The state-of-the-art solvers for LP and QP are mature and reliable at delivering accurate solutions. However, these methods are not suitable for modern computational resources, particularly GPUs. The computational bottleneck of these methods is matrix factorization, which usually requires significant memory and makes it highly challenging to take advantage of the massive parallelism of GPUs. In contrast, first-order methods (FOMs) only require matrix-vector multiplications, which work well on GPUs and have already accelerated machine learning training during the last 15 years. This ongoing line of research aims to scale up and speed up LP and QP by using FOMs and GPUs.
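
    As a toy illustration of why FOMs suit this setting, here is a minimal primal-dual hybrid gradient (PDHG) sketch for a tiny standard-form LP (pure Python for readability; production solvers such as PDLP add restarts, adaptive step sizes, and preconditioning, none of which is shown here):

```python
# Minimal PDHG sketch for  min c^T x  s.t.  Ax = b, x >= 0.
# Only matrix-vector products are needed -- the GPU-friendly property
# highlighted in the abstract.  Step sizes must satisfy tau*sigma*||A||^2 < 1.

def pdhg_lp(c, A, b, tau=0.4, sigma=0.4, iters=50_000):
    n, m = len(c), len(b)
    x = [0.0] * n
    y = [0.0] * m
    for _ in range(iters):
        # Primal step: projected gradient step on the Lagrangian.
        x_new = [max(0.0, x[j] - tau * (c[j] - sum(A[i][j] * y[i] for i in range(m))))
                 for j in range(n)]
        x_bar = [2 * x_new[j] - x[j] for j in range(n)]   # extrapolation
        # Dual step on the residual b - A x_bar.
        y = [y[i] + sigma * (b[i] - sum(A[i][j] * x_bar[j] for j in range(n)))
             for i in range(m)]
        x = x_new
    return x

# min x0 + 2*x1  s.t.  x0 + x1 = 1, x >= 0   (optimum at x = (1, 0))
x = pdhg_lp([1.0, 2.0], [[1.0, 1.0]], [1.0])
print(x)
```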

  • Jim Luedtke (University of Wisconsin-Madison), Optimally Delaying Attacker Projects Under Resource Constraints [Slides]

    Abstract: We consider the problem of selecting and scheduling mitigations to delay a set of attacker projects. Prior research has considered a problem in which a defender takes actions that delay steps in an attacker project in order to maximize the attacker project duration, given a budget constraint on their actions. We consider an extension of this model in which carrying out the mitigations requires competing limited resources, and hence the selected mitigations must be scheduled over time. At the same time, the attackers may be progressing along their project, and so the timing of completed mitigations determines whether or not they are successful in delaying tasks of the attacker projects. We propose an integrated integer programming model of this bilevel problem by using a time-indexed network that enables keeping track of attackers' task durations over time as determined by the defender's mitigation schedule. We introduce an information-based relaxation and use this to derive an alternative formulation that is stronger and more compact. We find that our reformulation greatly improves the solvability of the model. In addition, we see that considering the interaction of defender mitigation and attacker project schedules leads to 6-10% improvement in the objective over models that ignore this interaction.

    This is joint work with Ashley Peper and Laura Albert from UW-Madison.

  • Victor Reis (Microsoft Research), The Subspace Flatness Conjecture and Faster Integer Programming [Slides]

    Abstract: In a seminal paper, Kannan and Lovász (1988) considered a quantity $\alpha(\mathcal{L},K)$ which denotes the best volume-based lower bound on the covering radius $\mu(\mathcal{L}, K)$ of a convex body $K$ with respect to a lattice $\mathcal{L}$. Kannan and Lovász proved that $\mu(\mathcal{L}, K) \le n \cdot \alpha(\mathcal{L},K)$ and the Subspace Flatness Conjecture by Dadush (2012) claims a $O(\log n)$ factor suffices, which would match a lower bound from the work of Kannan and Lovász.

    We settle this conjecture up to a constant in the exponent by proving that $\mu(\mathcal{L}, K) \le O(\log^3 n)\cdot \alpha(\mathcal{L},K)$. Our proof is based on the Reverse Minkowski Theorem due to Regev and Stephens-Davidowitz (2017). Following the work of Dadush (2012, 2019), we obtain a $(\log n)^{O(n)}$-time randomized algorithm to solve integer programs in $n$ variables. Another implication of our main result is a near-optimal $\textit{flatness constant}$ of $O(n\log^3 n)$.

  • Ward Romeijnders (University of Groningen), Convex approximations for multistage stochastic mixed-integer programs [Slides]

    Abstract: We consider multistage stochastic mixed-integer programs. These problems are extremely challenging to solve since the expected cost-to-go functions in these problems are typically non-convex due to the integer decision variables involved. This means that efficient decomposition methods using tools from convex approximations cannot be applied to this problem. For this reason, we construct convex approximations for the expected cost-to-go functions and we derive error bounds for these approximations that converge to zero when the total variation of the probability density functions of the random parameters in the model converges to zero. In other words, the convex approximations perform well if the variability of the random parameters in the problem is large enough. To derive these results, we analyze the mixed-integer value functions in the multistage problem, and show that any MILP with convex and Lipschitz continuous objective exhibits asymptotic periodic behavior. Combining these results with total variation bounds on the expectation of periodic functions yields the desired bounds.

  • Domenico Salvagnin (University of Padova), MIP formulations for delete-free AI planning [Slides]

    Abstract: Delete-free relaxations are a fundamental concept in classical AI planning, and the basis of state-of-the-art admissible heuristics for domain-independent A* search, like LM-cut. While generally considered too expensive to compute exactly, there has been renewed interest in understanding how effectively these relaxations can be computed in practice for standard planning tasks appearing in IPC competitions. In particular, several MIP formulations have been proposed in the last decade. In this work we reevaluate current formulations, and propose new approaches that incrementally improve over current methods by using standard MIP techniques, like MIP starts and lazy constraint separation.

  • Hamidreza Validi (Texas Tech University), Polytime Procedures for Fixing, Elimination, and Conflict Inequalities

    Abstract: Reducing the number of decision variables can improve both the theoretical and computational aspects of mixed integer programs. The decrease in decision variables typically occurs in two ways: (i) variable fixing and (ii) variable elimination. We propose polytime procedures for identifying fixing and elimination opportunities in binary integer programs using conflict graphs. Our fixing and elimination procedures are built upon the identification of a specific type of path, referred to as hopscotch paths, in conflict graphs. Furthermore, we develop a polytime procedure for adding conflict edges to the conflict graphs of binary integer programs. We will discuss how adding these edges to the conflict graph affects our proposed fixing and elimination procedures. Finally, we conduct computational experiments on a set of MIPLIB instances and compare our computational performance with that of Atamtürk et al. (European Journal of Operational Research, 2000).

  • Alinson S. Xavier (Argonne National Laboratory), MIPLearn: An Extensible Framework for Learning-Enhanced Optimization

    Abstract: In many practical scenarios, discrete optimization problems are solved repeatedly, often on a daily basis or even more frequently, with only slight variations in input data. Examples include the Unit Commitment Problem, solved multiple times daily for energy production scheduling, and the Vehicle Routing Problem, solved daily to construct optimal routes. In this talk, we introduce MIPLearn, an extensible open-source framework which uses machine learning (ML) to enhance the performance of state-of-the-art MIP solvers in these situations. Based on collected statistical data, MIPLearn predicts good initial feasible solutions, redundant constraints in the formulation, and other information that may help the solver to process new instances faster. The framework is compatible with multiple MIP solvers (e.g. Gurobi, CPLEX, SCIP, HiGHS), multiple modeling languages (JuMP, Pyomo, gurobipy) and supports user-provided ML models.

  • Luze Xu (UC Davis), Gaining or losing perspective for convex multivariate functions [Slides]

    Abstract: Mixed-integer nonlinear optimization formulations of the disjunction between the origin and a polytope via a binary indicator variable are broadly used in nonlinear combinatorial optimization for modeling a fixed cost associated with carrying out a group of activities and a convex cost function associated with the levels of the activities. The perspective relaxation of such models is often used to solve them to global optimality in a branch-and-bound context, but it typically requires suitable conic solvers and is not compatible with general-purpose NLP software in the presence of other classes of constraints. This motivates the investigation of when simpler but weaker relaxations may be adequate. Comparing the volume (i.e., Lebesgue measure) of the relaxations as a measure of tightness, we lift some of the results related to univariate functions to the multivariate case.
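
    For reference, the perspective relaxation in question can be sketched as follows (standard form; notation mine):

```latex
% Disjunction between the origin (z = 0) and activity levels in a polytope
% (z = 1), with convex activity cost f and 0 \le x \le u z:
t \;\ge\; z\, f\!\left(\tfrac{x}{z}\right), \qquad 0 \le x \le u z, \qquad z \in \{0,1\}.
% Relaxing z \in [0,1] gives the (tight) perspective relaxation; e.g. for
% f(x) = x^2 it becomes the rotated second-order cone t z \ge x^2, which is
% why conic solvers are typically required. The weaker alternative
% t \ge f(x) needs no conic machinery but loses tightness.
```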

  • Kim Yu (Université de Montréal), On L-Natural-Convex Minimization and Its Mixed-Integer Extension

    Abstract: L-natural-convex functions are a class of nonlinear functions defined over integral domains. Such functions are not necessarily convex, but they display a discrete analogue of convexity. In this work, we explore the polyhedral structure of the epigraph of any L-natural-convex function and provide a class of valid inequalities. We show that these inequalities are sufficient to describe the epigraph convex hull completely, and we give an exact separation algorithm. We further examine a mixed-integer extension of this class of minimization problems and propose strong valid inequalities. We establish the connection between our results and the valid inequalities for some structured mixed-integer sets in the literature.

  • Xian Yu (The Ohio State University), Residuals-Based Contextual Distributionally Robust Optimization with Decision-Dependent Uncertainty

    Abstract: We consider a residuals-based distributionally robust optimization model, where the underlying uncertainty depends on both covariate information and our decisions. We adopt regression models to learn the latent decision dependency and construct a nominal distribution (thereby ambiguity sets) around the learned model using empirical residuals from the regressions. Ambiguity sets can be formed via the Wasserstein distance, a sample robust approach, or with the same support as the nominal empirical distribution (e.g., phi-divergences), where both the nominal distribution and the radii of the ambiguity sets could be decision- and covariate-dependent. We provide conditions under which desired statistical properties, such as asymptotic optimality, rates of convergence, and finite sample guarantees, are satisfied. Via cross-validation, we devise data-driven approaches to find the best radii for different ambiguity sets, which can be decision-(in)dependent and covariate-(in)dependent. Through numerical experiments, we illustrate the effectiveness of our approach and the benefits of integrating decision dependency into a residuals-based DRO framework.

  • Yuan Zhou (University of Kentucky), Measuring Solid Angles for Polyhedral Cones in Arbitrary Dimension

    Abstract: Polyhedral cones are of interest in many mathematical fields, such as geometry and optimization. A natural and fundamental question is: how large is a given cone? Since a cone is unbounded, we consider its solid angle measure—the proportion of space the cone occupies. Solid angles generalize plane angles to higher dimensions, but beyond dimension three, no closed-form expressions are known. This makes accurate and efficient approximation methods essential.

    To that end, we explore existing methods and introduce a new approach—based on polyhedral decomposition and multivariable hypergeometric series—for approximating solid angles. We analyze the asymptotic error of the series and develop a dynamic truncation scheme that balances computational cost and accuracy. We implement our method in SageMath and apply it to a concrete problem of evaluating facet importance. We present computational results, compare our approach with existing techniques, and discuss its practical implications.
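
    As a simple baseline for such approximation methods, the solid-angle measure can also be estimated by plain Monte Carlo sampling of directions (a toy sketch of a standard baseline, not the series method described in the talk):

```python
import random

def solid_angle_fraction(rays, samples=100_000, seed=0):
    """Monte Carlo estimate of the solid-angle measure of a simplicial cone
    cone(rays) in R^n, i.e. the fraction of directions it contains.
    Membership x in cone(rays) holds iff R lam = x has a solution lam >= 0."""
    n = len(rays)
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        x = [rng.gauss(0.0, 1.0) for _ in range(n)]   # uniform random direction
        lam = solve(rays, x)                          # coordinates in ray basis
        if lam is not None and all(v >= 0.0 for v in lam):
            hits += 1
    return hits / samples

def solve(cols, rhs):
    """Gauss-Jordan elimination: solve sum_j lam_j * cols[j] = rhs (n x n)."""
    n = len(rhs)
    M = [[cols[j][i] for j in range(n)] + [rhs[i]] for i in range(n)]
    for k in range(n):
        piv = max(range(k, n), key=lambda r: abs(M[r][k]))
        if abs(M[piv][k]) < 1e-12:
            return None                               # singular ray matrix
        M[k], M[piv] = M[piv], M[k]
        for r in range(n):
            if r != k:
                f = M[r][k] / M[k][k]
                M[r] = [a - f * b for a, b in zip(M[r], M[k])]
    return [M[i][n] / M[i][i] for i in range(n)]

# The nonnegative orthant in R^3 occupies exactly 1/8 of all directions.
est = solid_angle_fraction([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
print(est)   # close to 0.125
```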

Contributed Talks

MIP 2025 invited submissions of contributed “flash” talks by “nonstudents” (those who were not eligible for the poster session, such as postdocs, faculty, national lab members, and industry researchers). The deadline to submit a title/abstract was April 7, 2025. The presented talks are listed below.

  • Daniele Catanzaro (Université Catholique de Louvain, Belgium), Optimizing over Path-Length Matrices of Unrooted Binary Trees

    Abstract: Path-Length Matrices (PLMs) form a tree encoding scheme that is often used in the context of optimization problems defined over Unrooted Binary Trees (UBTs). Determining the conditions that a symmetric integer matrix of order n ≥ 3 must satisfy to encode the PLM of a UBT with n leaves is central to these applications. Here, we show that a certain subset of known necessary conditions is also sufficient to characterize the set Θn of PLMs induced by the set of UBTs with n leaves. We also identify key polyhedral results as well as facet-defining and valid inequalities for the convex hull of these matrices. We then examine how these results can be used to develop a new Branch-&-Cut algorithm for the Balanced Minimum Evolution Problem (BMEP), an NP-hard nonlinear optimization problem over Θn much studied in the literature on molecular phylogenetics. We present a new valid integer linear programming formulation for this problem and we show how to refine it, by removing redundant variables and constraints. We also show how to strengthen the reduced formulation by including lifted versions of the previously identified facet-defining and valid inequalities. We provide separation oracles for these inequalities and we exploit the tight lower bound provided by the linear programming relaxation of this formulation to design a new primal heuristic for the BMEP. The results of extensive computational experiments show that the Branch-&-Cut algorithm derived from this study outperforms the current state-of-the-art exact solution algorithm for the problem.

  • Akif Çördük (Nvidia), GPU-Accelerated Evolutionary Framework for Primal Heuristics

    Abstract: We present a GPU-accelerated heuristic framework utilizing parallelized algorithms that power Nvidia cuOpt. In this work, we propose novel evolutionary algorithms and integrate them into an evolutionary framework aimed at improving an initial population of solutions. Offspring solutions are generated through recombination operators and further refined using local search techniques. These solutions are maintained within a population pool that ensures controlled diversity and incorporates Lagrangian local search weights.

    Additionally, we introduce a fast bounds propagation rounding procedure that leverages informed bulk rounding and repair strategies. We also briefly discuss why certain algorithms are well-suited for GPU parallelization, including feasibility jump, feasibility pump, PDLP, bounds propagation, recombination heuristics, and local search heuristics.

    Finally, we present our results by demonstrating the improvement of the initial population and comparing our approach with other primal heuristics.

    This is a joint work with Piotr Sielski.

  • Yongchun Li (University of Tennessee Knoxville), The Augmented Factorization Bound for Maximum-Entropy Sampling [Slides]

    Abstract: The maximum-entropy sampling problem (MESP) aims to select the most informative principal submatrix of a prespecified size from a given covariance matrix. This paper proposes an augmented factorization bound for MESP based on concave relaxation. By leveraging majorization and Schur-concavity theory, we demonstrate that this new bound dominates the classic factorization bound of Nikolov (2015) and a recent upper bound proposed by Li et al. (2024). Furthermore, we provide theoretical guarantees that quantify how much our proposed bound improves the two existing ones and establish sufficient conditions for when the improvement is strictly attained. These results allow us to refine the celebrated approximation bounds for the two approximation algorithms of MESP. Moreover, motivated by the strength of this new bound, we develop a variable fixing logic for MESP from a primal perspective. Finally, our numerical experiments demonstrate that our proposed bound achieves smaller integrality gaps and fixes more variables than the tightest bounds in the MESP literature on most benchmark instances, with the improvement being particularly significant when the condition number of the covariance matrix is small.

  • Gonzalo Muñoz (Universidad de Chile), Tightening convex relaxations of trained neural networks: a unified approach for convex and S-shaped activations [Slides]

    Abstract: The non-convex nature of trained neural networks has created significant obstacles in their incorporation into optimization models. Considering the wide array of applications that this embedding has, the optimization and deep learning communities have dedicated significant efforts to the convexification of trained neural networks. Many approaches to date have considered obtaining convex relaxations for each non-linear activation in isolation, which poses limitations in the tightness of the relaxations. Anderson et al. (2020) strengthened these relaxations and provided a framework to obtain the convex hull of the graph of a piecewise linear convex activation composed with an affine function; this effectively convexifies activations such as the ReLU together with the affine transformation that precedes it. In this work, we contribute to this line of work by developing a recursive formula that yields a tight convexification for the composition of an activation with an affine function for a wide scope of activation functions, namely, convex or "S-shaped". Our approach can be used to efficiently compute separating hyperplanes or determine that none exists in various settings, including non-polyhedral cases. We provide computational experiments to test the empirical benefits of these convex approximations.
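
    As background on the single-activation relaxations being strengthened, the textbook big-M MIP encoding of a ReLU y = max(0, a) for a pre-activation value a with known bounds L ≤ a ≤ U is (notation mine):

```latex
% ReLU y = max(0, a) with L \le a \le U and indicator z \in \{0,1\}:
y \ge a, \qquad y \ge 0, \qquad y \le a - L(1-z), \qquad y \le U z.
% Its LP relaxation (z \in [0,1]) treats the activation in isolation;
% Anderson et al. (2020)-style formulations tighten it by also exploiting
% the affine layer a = w^\top x + b that precedes the activation.
```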

  • Zedong Peng (MIT), MPAX: Mathematical Programming in JAX

    Abstract: We introduce MPAX (Mathematical Programming in JAX), a versatile and efficient toolbox for integrating mathematical programming into machine learning workflows. MPAX implements state-of-the-art first-order methods, the restarted average primal-dual hybrid gradient and the reflected restarted Halpern primal-dual hybrid gradient, to solve linear programming and quadratic programming problems in JAX. It provides native support for hardware acceleration along with features like batch solving, auto-differentiation, and device parallelism. Extensive numerical experiments demonstrate the advantages of MPAX over existing solvers. The solver is available at https://github.com/MIT-Lu-Lab/MPAX.

  • Sergio García Quiles (University of Edinburgh), Some preprocessing and polyhedral results on facility location with preferences [Slides]

    Abstract: The simple plant location problem with order (SPLPO) is a generalization of the well-known simple plant location problem (SPLP) in which customer preferences are considered. In the version without preferences, facilities must be chosen among a set of candidates (locator agent) and each customer from a given set must be allocated to one of the open facilities (allocator agent) so that the total cost (location plus allocation) is minimized. When preferences are added, each customer ranks the facilities from most preferred to least preferred. Once the decision on which facilities to open is taken, each customer attends their most preferred open facility. These preferences may be such that the criteria of the locator and the allocator are totally aligned (and thus the added preference constraints are redundant), or such that the locator and allocator have completely opposite criteria (making the choice of which facilities to open critical to avoid large costs), or anything in between.

    The problem can be formulated by simply adding an extra family of inequalities to the SPLP. However, this new formulation opens the door to several interesting results obtained from a careful analysis of the problem. Two that I will illustrate are: 1) replacing a substantial number of the original preference constraints with fewer constraints that provide a tighter formulation, by identifying certain preference sets with empty intersection and then heuristically finding maximal cliques in a certain graph; and 2) observing that more than 90% of the constraints $x_{ij} \le y_j$, which are all facets for the SPLP, are not facets for the SPLPO due to a particular dominance hierarchy of constraints $x_{i_1,j} \le x_{i_2,j}$.

  • Thiago Serra (University of Iowa), Optimal Combinatorial Testing with Constraints: The Balancing Act

    Abstract: Imagine that you are in front of a cockpit with several on-off buttons. If you were to test it thoroughly, you would need to try a prohibitive number of configurations. But since most bugs in practice can be isolated to interactions among a few components, having tests that cover every possible pairwise configuration is a good start. However, this problem goes from easy to NP-hard as soon as some pairwise configurations are forbidden, and constraints are inevitable in real-life applications of optimization. In this work, we revisit unconstrained combinatorial testing with pairwise coverage on binary parameters and contrast it with the constrained case, showing or conjecturing properties that are upheld or that change from one to the other. In particular, we discuss the extent to which it remains a good idea, and sometimes is indeed optimal, to turn every button on almost as often as off to minimize testing. Using integer programming, we propose an exact algorithm and a faster heuristic that often produces provably optimal solutions.
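    The notion of pairwise coverage in this abstract can be illustrated with a small unconstrained example (a standard textbook construction, not the talk's algorithm): for 3 binary parameters, 4 tests suffice to cover every pairwise configuration, versus 2^3 = 8 exhaustive ones, and each button is on exactly as often as off.

```python
from itertools import combinations, product

# A pairwise covering array for 3 binary parameters: every pair of
# columns exhibits all four combinations 00, 01, 10, 11.
tests = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

def covers_all_pairs(tests, n_params=3):
    """Check that every pair of parameters takes all 4 value combinations."""
    for i, j in combinations(range(n_params), 2):
        seen = {(t[i], t[j]) for t in tests}
        if seen != set(product([0, 1], repeat=2)):
            return False
    return True
```

    Note the balance in the array: each column contains two 0s and two 1s, echoing the abstract's observation about turning every button on almost as often as off. Dropping any one test breaks coverage.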

  • Junlong Zhang (Tsinghua University), Construction of Value Functions of Integer Programs with Finite Domain [Slides]

    Abstract: Value functions play a central role in integer programming duality, and they are also used to develop solution methods for stochastic integer programs, bilevel integer programs, and robust optimization problems. In this paper, we propose a column-by-column algorithm for constructing the value functions of integer programs with finite domain over the set of level-set minimal vectors. The proposed algorithm starts with the first column and sequentially adds the remaining columns one by one. Each time a column is added, a new set of level-set minimal vectors is generated from the previous set, and the optimal objective values over the level-set minimal vectors are also computed. The advantage of the proposed algorithm is that no integer program needs to be solved for instances with nonnegative constraint matrices. Computational results on benchmark instances show that the proposed algorithm achieves speedups of up to three orders of magnitude compared with a state-of-the-art algorithm. We also extend the proposed algorithm to build value functions of integer programs with negative elements in the constraint matrix.
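    To make the object in this abstract concrete, the sketch below tabulates the value function of a toy integer program with finite domain by brute-force enumeration. This is purely illustrative; it is not the paper's column-by-column / level-set-minimal-vector algorithm, whose point is precisely to avoid this kind of per-right-hand-side enumeration.

```python
from itertools import product

# Value function z(b) = min{ 2*x1 + 3*x2 : x1 + 2*x2 >= b, x in {0,...,3}^2 }
# of a toy IP, computed by enumerating the finite domain for each b.
def value_function(b, bound=3):
    feasible = [
        2 * x1 + 3 * x2
        for x1, x2 in product(range(bound + 1), repeat=2)
        if x1 + 2 * x2 >= b
    ]
    return min(feasible) if feasible else float("inf")

# Tabulate z over a few right-hand sides; note z is nondecreasing in b
# and jumps only at integer thresholds.
values = {b: value_function(b) for b in range(6)}
```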

Land-Doig MIP Computational Competition

The 2025 MIP Workshop hosted a competition for research teams to showcase their practical, computational prowess. This year, the competition was renamed to the Land-Doig MIP Competition in honor of Ailsa H. Land and Alison G. Harcourt (née Doig), with permission.

All details can be found here.

The winners of the computational competition are Gioni Mexi, Deborah Hendrych, Sébastien Designolle, Mathieu Besançon, and Sebastian Pokutta for their work "A Frank-Wolfe-based Primal Heuristic for Quadratic Mixed-integer Optimization".


Honorable mentions were awarded to:

  • Yongzheng Dai for his work "Modified Eigenvalue Method for Nonconvex MIQCQP and Parallel Local Branching", and
  • Suri Liu and Stefan Minner for their work "A Classification-and-Relaxation Method for Solving Quadratic Mixed Integer Programming".


Organization

Sponsors

Federal Sponsors

AFOSR: This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-25-1-0166.
NSF

Gold "Facet-Defining Cut" Tier

U of Minnesota, ISyE
Georgia Tech ISyE

Silver "Supporting Cut" Tier

Google

Bronze "Valid Cut" Tier

MOSEK Aps
Cardinal Operations Technology (Beijing) Co. Ltd
Gurobi Optimization

FICO: Fair Isaac Services Ltd
Lehigh University Department of Industrial and Systems Engineering
The Optimization Firm

Northwestern University IEMS
Texas A&M ISE
University of Florida, Industrial & Systems Engineering

Local Sponsors

Entourage Events Group

Credits

Logo design by © Julia Silbermann (font credits: Cornerstone, by Zac Freeland). Website hosted on GitHub Pages. Layout based on Skeleton using the Raleway font.