May 23-26, 2022
DIMACS, Rutgers University
The Mixed Integer Programming (MIP) Workshop is a single-track workshop highlighting the latest trends in integer programming and discrete optimization, with speakers chosen by invitation.
The MIP 2022 edition of the workshop will be the nineteenth in the MIP series, and it will be opened by DANniversary, a special conference in celebration of Daniel Bienstock's 60th birthday.
Please register here to attend the MIP 2022 workshop. The registration fee is $30 before April 15, and $45 between April 15 and May 15.
DANniversary opening remarks
Abstract: A vector is dyadic if each of its entries is a dyadic rational number, i.e., if it has an exact floating-point binary representation. We study the problem of finding a dyadic optimal solution to a linear program, if one exists. We present theorems of the alternative for the existence of a solution and polynomial algorithms for solving dyadic linear programs, and we study dyadic dual solutions. For set covering problems with an ideal constraint matrix, Seymour conjectured that an optimal dyadic dual solution always exists. This is joint work with Ahmad Abdi, Bertrand Guenin and Levent Tuncel.
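As a small illustration of the dyadic property discussed above (not part of the talk's algorithms), a rational p/q in lowest terms is dyadic exactly when its denominator q is a power of two:

```python
from fractions import Fraction

def is_dyadic(x: Fraction) -> bool:
    """A rational is dyadic iff its reduced denominator is a power of 2."""
    q = x.denominator  # Fraction always stores values in lowest terms
    return q & (q - 1) == 0  # power-of-two test via a bit trick

# 3/8 = 0.011 in binary, so it has an exact binary representation; 1/3 does not.
print(is_dyadic(Fraction(3, 8)))   # True
print(is_dyadic(Fraction(1, 3)))   # False
```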
Abstract: One of the most effective methods for solving large-scale integer programming models is branch and price, which relies on column generation to solve the linear programming relaxation. Working with a restricted set of variables (or columns), column generation iteratively adds new variables to the model until an optimal basis is found. In this presentation, we propose an alternative approach that instead works with a relaxed set of columns, from which infeasible ones are iteratively eliminated. As the total number of columns can typically be exponentially large, we use relaxed decision diagrams to compactly represent and manipulate the set of columns. We discuss methodological insights and experimentally compare column elimination and column generation on two applications: graph coloring and truck-drone routing.
Abstract: Random projections are random matrices that decrease the dimensionality of a finite set of vectors while guaranteeing approximate congruence of the high- and low-dimensional point sets. Their application to Mathematical Programming yields projected formulations with fewer constraints or variables (or, occasionally, both), which can be solved faster than their full-dimensional counterparts, and provide bounds on the optimal value as well as approximately feasible solutions. I will summarize the work done so far in LP, SDP, and QP.
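A minimal sketch of the approximate-congruence property underlying this line of work (a Johnson-Lindenstrauss-style Gaussian projection; the projected mathematical programs in the talk are beyond this toy example):

```python
import math
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def project(points, k):
    """Project d-dimensional points to k dimensions with a random
    Gaussian matrix scaled by 1/sqrt(k)."""
    d = len(points[0])
    T = [[random.gauss(0, 1) / math.sqrt(k) for _ in range(d)]
         for _ in range(k)]
    return [[sum(T[i][j] * p[j] for j in range(d)) for i in range(k)]
            for p in points]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Five random points in dimension 500, projected to dimension 200.
pts = [[random.gauss(0, 1) for _ in range(500)] for _ in range(5)]
low = project(pts, 200)
# Pairwise distances are approximately preserved by the projection.
r = dist(low[0], low[1]) / dist(pts[0], pts[1])
```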
Abstract: Cutting planes for mixed-integer linear programs (MILPs) are typically computed in rounds by iteratively solving optimization problems. Instead, we reframe the problem of finding good cutting planes as a continuous optimization problem over weights parametrizing families of valid inequalities. We propose a practical two-step algorithm, we expose some surprising properties of the generated cuts, and we demonstrate empirical gains when optimizing Gomory mixed-integer inequalities over various families of MILPs. Finally, a reinterpretation of our approach as optimization of the subadditive dual using a deep neural network is provided. (This is joint work with Didier Chételat.)
Abstract: The maximum-entropy sampling problem (MESP) is to select a subset, of given size s, from a set of correlated Gaussian random variables, so as to maximize the "differential entropy". If C is the covariance matrix, then we are simply seeking to maximize the determinant of an order-s principal submatrix. A key application is the contraction of an environmental-monitoring network. MESP sits within the intersection of optimization and data science, and so it has attracted a lot of recent attention. The problem is NP-hard, and there have been algorithmic attacks aimed at exact solution of moderate-sized instances for three decades. It is a fascinating problem from the perspective of integer nonlinear optimization, as it does not easily fit within a framework that is successfully attacked via available general-purpose paradigms. I will give an overview of algorithmic work, concentrating on recent developments.
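The objective described above can be stated in a few lines of code. The following brute-force enumeration is only an illustration of the determinant objective (exponential time, not one of the talk's algorithms):

```python
from itertools import combinations
from math import log

def det(M):
    """Determinant by Laplace expansion along the first row
    (fine for the tiny matrices in this toy example)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def mesp_brute_force(C, s):
    """Enumerate all order-s principal submatrices of the covariance
    matrix C and return the index set maximizing log det, i.e. the
    entropy objective up to additive constants."""
    def subdet(S):
        return det([[C[i][j] for j in S] for i in S])
    best = max(combinations(range(len(C)), s), key=subdet)
    return best, log(subdet(best))

C = [[4.0, 1.0, 0.5],
     [1.0, 3.0, 0.2],
     [0.5, 0.2, 2.0]]
# Best subset is (0, 1): its principal submatrix has determinant 11.
print(mesp_brute_force(C, 2))
```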
Abstract: A classical approach for obtaining valid inequalities for a set involves the analysis of relaxations constructed using aggregations of the inequalities that describe such a set. When the set is described by linear inequalities, thanks to the Farkas lemma, we know that every valid inequality can be obtained using aggregations. When the inequalities describing the set are two quadratics, Yildiran (2009) showed that the convex hull of the set is given by at most two aggregated inequalities. In this work, we study the case of a set described by three or more quadratic inequalities. We show that, under technical assumptions, the convex hull of a set described by three quadratic inequalities can be obtained via (potentially infinitely many) aggregated inequalities. We also show, through counterexamples, that such a result does not hold if either the technical conditions are relaxed or if we consider four or more inequalities. This is joint work with Santanu Dey and Felipe Serrano.
Abstract: This talk discusses various techniques that can be applied to polynomial optimization problems, with particular emphasis on adapting the feasibility pump as a primal heuristic for polynomial optimization (ongoing joint work with Dan, Gonzalo, and his student Pablo). I also discuss my work on a branch-and-cut solver and how it led to working with Dan on developing intersection cuts for polynomial optimization. A unifying concept is the standard extended formulation for polynomial optimization, in which any such problem is represented as a linear (or semidefinite) program with nonconvex side constraints. Many techniques from integer programming (LP + integer side constraints) can thus be ported over to polynomial optimization.
Abstract: dano3mip entered MIPLIB 3.0 in 1996. It is still a member of MIPLIB 2017, and no proven optimal solution has been computed so far. What is the state of affairs? Will quantum computers finally solve it? What can we do?
Abstract: We show, in an exemplary Optimal Power Flow (OPF) setting, how to substitute direct solution of OPF with a machine learning approach that learns locational marginal prices (LMPs) of generators. Rather than solving OPF directly, the Karush--Kuhn--Tucker (KKT) conditions for the OPF problem in question may be written down, and in parallel the LMPs of generators and loads may be expressed in terms of the OPF Lagrange multipliers. The KKT conditions provide a means of solving algebraically for these Lagrange multipliers. However, many of the variables in this solution will be zero, corresponding to power lines that are not saturated at their thermal limits. We propose a classification scheme to determine which of the power lines are saturated, thus highlighting the nonzero Lagrange multipliers that must be solved for. This is joint work with R. Ferrando, L. Pagnier, Y. Dvorkin and D. Bienstock.
Abstract: I’ll talk about my collaboration with Dan, starting in the early 1990s on network design problems and continuing to recent work on analytical estimation of multicommodity flows. I’ll provide in passing anecdotes of Dan’s remarkable skills, not only in solving the former problems numerically but also in his choice of apt variable names in his code.
Abstract: My own background and life story have a lot in common with my friend Dan Bienstock’s. Both of our family backgrounds were strongly shaped by World War II and the Holocaust, which Dan’s grandparents managed to escape by emigrating from Europe to Uruguay. Since we converged in the Boston area in the late 1970s, there have been many similarities between my path and Dan’s. We first met as doctoral students at the MIT Operations Research Center (ORC) in February 1984. Dan brought an unusual and sometimes festive atmosphere to the MIT ORC, something that I did not anticipate based on my earlier brief stint as a doctoral student in engineering. I will be sure to provide some embarrassing details.
Dinner at The Frog & The Peach
09:15-09:30: MIP opening remarks
Abstract: We give a strongly polynomial-time algorithm for integer linear programs defined by integer coefficient matrices whose subdeterminants are bounded by a constant and that contain at most two nonzero entries in each row. The core of our approach is the first polynomial-time algorithm for the weighted stable set problem on graphs that do not contain more than k vertex-disjoint odd cycles, where k is any constant. Previously, polynomial-time algorithms were only known for k=0 (bipartite graphs) and for k=1. We observe that integer linear programs defined by coefficient matrices with bounded subdeterminants and two nonzeros per column can also be solved in strongly polynomial time, using a reduction to b-matching. This is joint work with Gwenaël Joret (ULB, Brussels), Stefan Weltge (TU Munich) and Lena Yuditsky (ULB, Brussels).
Abstract: We present an extension to Generalized Disjunctive Programming (GDP) to model hierarchical systems via nested disjunctions. We denote these more general GDP models as Nested GDPs (NGDPs). NGDPs are compared to their Equivalent GDP formulations. Reformulations to algebraic optimization models of NGDPs and the Equivalent GDPs are formalized. The two approaches are compared in terms of resulting model tightness and size (number of discrete and continuous variables, and constraints). Computational results are presented with examples from the areas of process flowsheet synthesis and optimization of electric power expansion planning. The results show the value in using NGDPs to systematically model and solve real-world problems.
Abstract: A binarization of a variable x is a linear formulation through additional binary variables, whose integrality implies the integrality of x. A binary extended formulation of a polyhedron P is obtained by adding to the original description of P binarizations of some of its variables. In the context of mixed-integer programming, imposing integrality on 0/1 variables rather than on general integer variables has interesting convergence properties and has been studied both from the theoretical and from the practical point of view. We propose a notion of natural binarizations and binary extended formulations, encompassing all the ones studied in the literature. We give a simple characterization of the vertices of such formulations, which allows us to study their behavior with respect to sequential convexification. In particular, given a binary extended formulation and one of its variables x, we study a parameter that measures the progress made towards ensuring the integrality of x via application of sequential convexification. We formulate this parameter, which we call rank, as the solution of a set covering problem and express it exactly for the classical binarizations from the literature.
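For readers unfamiliar with binarization, here is a toy sketch of one classical encoding from the literature, the "full" binarization x = Σ_i 2^i y_i of a bounded integer variable (an illustration of the general idea only, not of the rank parameter studied in the talk):

```python
from math import ceil, log2

def full_binarization(u):
    """Coefficients 2^0, ..., 2^(k-1) for the classical 'full'
    binarization x = sum_i 2^i * y_i of an integer 0 <= x <= u.
    Note: values up to 2^k - 1 are representable, so the bound
    x <= u must still be imposed as a separate constraint."""
    k = max(1, ceil(log2(u + 1)))
    return [2 ** i for i in range(k)]

def encode(x, coeffs):
    """Binary expansion of x over the given power-of-two coefficients,
    computed greedily from the largest coefficient down."""
    y = []
    for c in reversed(coeffs):
        y.append(1 if x >= c else 0)
        x -= c * y[-1]
    return list(reversed(y))

coeffs = full_binarization(10)   # [1, 2, 4, 8]
y = encode(6, coeffs)            # [0, 1, 1, 0]
# The binary variables recover x exactly:
assert sum(c * yi for c, yi in zip(coeffs, y)) == 6
```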
Abstract: We establish a polynomial equivalence between tree ensemble optimization and optimization of multilinear functions over the Cartesian product of simplices. We use this insight to derive new formulations for tree ensemble optimization problems and to obtain new convex hull results for multilinear polytopes. A computational experiment on multi-commodity transportation problems with costs modeled using tree ensembles shows the practical advantage of our formulation relative to existing formulations of tree ensembles and other piecewise-linear modeling techniques. Motivated by the inability of tree ensembles to model multilinear functions of continuous variables, we then propose a generalization of decision-trees and provide empirical evidence of a better out-of-sample fit for a large collection of randomly perturbed nonlinear functions. This is joint work with Jongeun Kim (University of Minnesota) and Mohit Tawarmalani (Purdue).
Abstract: We study a generalization of the warehouse problem with time-dependent bounds on purchase, sale, and storage quantities as well as fixed costs and complementarity constraints on purchases and sales. This variant is particularly relevant to applications in energy markets. For the general case we present extended LP formulations and pseudo-polynomial time algorithms. For some common special cases we present the first polynomial-time algorithms. Our approach can handle certain extensions, including the stochastic version of the problem.
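To give a feel for the pseudo-polynomial flavor mentioned above, here is a toy dynamic program over integer storage levels for a bare-bones warehouse problem (a hypothetical sketch; it ignores the fixed costs and complementarity constraints treated in the talk):

```python
def warehouse_dp(T, cap, buy_price, sell_price, max_buy, max_sell):
    """Pseudo-polynomial DP for a simple warehouse problem: integer
    stock level 0..cap, per-period purchase/sale bounds and prices.
    profit[s] = best profit achievable ending a period at stock s."""
    NEG = float("-inf")
    profit = [0.0] + [NEG] * cap  # start with an empty warehouse
    for t in range(T):
        new = [NEG] * (cap + 1)
        for s in range(cap + 1):
            if profit[s] == NEG:
                continue
            for buy in range(min(max_buy[t], cap - s) + 1):
                for sell in range(min(max_sell[t], s + buy) + 1):
                    s2 = s + buy - sell
                    p = profit[s] - buy * buy_price[t] + sell * sell_price[t]
                    if p > new[s2]:
                        new[s2] = p
        profit = new
    return max(profit)

# Buy cheap in period 0, sell high in period 1 (capacity 2): profit 6.
print(warehouse_dp(2, 2, [1, 5], [0, 4], [2, 2], [2, 2]))
```

The state space is the number of periods times the storage capacity, which is polynomial in the numeric value of the capacity but not in its encoding length, hence "pseudo-polynomial".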
Abstract: In this talk I'll try to explain "monoidal strengthening", a technique introduced by Balas and Jeroslow in the 1980s to strengthen intersection cuts by exploiting the integrality of non-basic variables. I'll discuss some limitations of the technique and then explain how one of these limitations forces us to introduce a condition for the application of monoidal strengthening to disjunctive cuts. Finally, I'll mention how monoidal strengthening can be used to strengthen some intersection cuts for quadratic mixed-integer programming.
4:00-5:00: Computational competition presentations & results
Abstract: We consider in this talk separable plus quadratic (SPQ) polynomials, i.e., polynomials that are the sum of univariate polynomials in different variables and a quadratic polynomial. Motivated by the fact that nonnegative separable and nonnegative quadratic polynomials are sums of squares, we answer two questions around SPQ polynomials: namely, whether nonnegative SPQ polynomials are (i) the sum of a nonnegative separable and a nonnegative quadratic polynomial, and (ii) a sum of squares. We also present applications of SPQ polynomials to upper bounding sparsity of solutions to linear programs and a generalization of Newton’s method which incorporates separable higher-order derivative information. Joint work with Amir Ali Ahmadi and Cemil Dibek.
Abstract: In this talk, I will discuss some recent progress in combinatorial optimization. In particular, I will explain the nearly linear time algorithm for maxflow and nearly matrix multiplication time algorithms for linear programs. These algorithms are based on a careful combination of optimization techniques and data structures. Some of the common ideas are generic and are useful beyond optimization.
Abstract: In this talk, we focus on mixed binary programming problems and present a branch-and-bound algorithm based on adding penalty terms to the objective function along the nodes. The resulting scheme addresses a sequence of continuous problems that share the same feasible set, while the objective function changes slightly so that the penalization of the integrality constraint violation is progressively increased. In the context of mixed integer linear complementarity problems (MILCPs), enhancements of the method such as tailored valid inequalities, node selection strategies, branching rules, and warmstarting techniques are explored. A numerical comparison on MILCP instances between the branch-and-bound method and two benchmark approaches from the literature is presented.
Abstract: We study how to integrate rider mode preferences into the design of emerging On-Demand Multimodal Transit Systems (ODMTS). We propose a bilevel optimization model to address this challenge, in which the leader problem determines the ODMTS design, and the follower problems identify the most cost-efficient and convenient route for riders under the chosen design. The leader model contains a choice model for every potential rider that determines whether the rider adopts the ODMTS given her proposed route. To solve this model, we propose an exact decomposition method that includes Benders optimal cuts and no-good cuts to ensure the consistency of the rider choices in the leader and follower problems. Moreover, to improve computational efficiency, we derive upper bounds on trip durations for the follower problems and valid inequalities that strengthen the no-good cuts. The proposed method is validated using an extensive computational study on a real data set from the Ann Arbor and Ypsilanti region in Michigan. The designed ODMTS features high adoption rates and significantly shorter trip durations compared to the existing transit system, and highlights the benefits in accessibility for low-income riders. Finally, the computational study demonstrates the efficiency of the decomposition method, which improves on the baseline method by several orders of magnitude.
Abstract: Budget constraints are ubiquitous in online advertising auctions. To manage these constraints and smooth out the expenditure across auctions, the bidders (or the platform on behalf of them) often employ pacing: each bidder is assigned a multiplier between 0 and 1, and her bid on each item is multiplicatively scaled down by the multiplier. This naturally gives rise to a game in which each bidder strategically selects a multiplier, which gives rise to the pacing equilibrium solution concept. In this work, we show that the problem of finding an approximate pacing equilibrium is PPAD-complete for second-price auctions. This resolves an open question of [CKSSM17]. As a consequence of our hardness result, we refute the conjecture by [BCI+07] that tâtonnement-style budget-management dynamics will converge efficiently for repeated second-price auctions. Our hardness result also implies the existence of a refinement of supply-aware market equilibria which is hard to compute with simple linear utilities. Interestingly, the computational hardness for this problem also translates into a seemingly-difficult problem for MIP solvers. I will show some experimental results on small-scale instances where we are unable to find solutions at around 15 buyers and auctions.
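To make the pacing mechanism described above concrete, here is a toy simulation of paced second-price auctions: given fixed multipliers, each item goes to the highest paced bid and the winner pays the second-highest paced bid (an illustrative sketch only; the equilibrium computation whose hardness the talk establishes is a different matter):

```python
def second_price_outcome(values, alpha):
    """Given bidders' values per item (values[i][j] = bidder i's value
    for item j) and pacing multipliers alpha, compute each bidder's
    total spend when every item is sold by second-price auction on the
    paced bids alpha[i] * values[i][j]. Ties go to the lowest index."""
    n, m = len(values), len(values[0])
    spend = [0.0] * n
    for j in range(m):
        bids = sorted(((alpha[i] * values[i][j], -i) for i in range(n)),
                      reverse=True)
        winner = -bids[0][1]
        spend[winner] += bids[1][0]  # pay the second-highest paced bid
    return spend

# Two bidders, two items; bidder 0 is paced at 0.5, bidder 1 is unpaced.
# Bidder 1 wins both items, paying bidder 0's paced bids (5 and 2).
print(second_price_outcome([[10, 4], [6, 8]], [0.5, 1.0]))  # [0.0, 7.0]
```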
Abstract: In finite graphs, the problems of finding a minimum weight spanning tree (MinST) and a maximum weight spanning tree (MaxST) can both be solved by the same greedy algorithms (for instance, Prim's algorithm). In the case of infinite graphs, however, we illustrate a general class of problems where a greedy approach can be used to discover a MaxST while a MinST may be unreachable by any greedy approach. For the MinST case, we develop a "layered greedy algorithm" that finds minimum spanning trees in finite "chunks" and show that the iterates converge to the MinST under certain conditions on the graph. This is joint work with Robert L. Smith and Marina A. Epelman.
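The finite-graph starting point of the abstract is easy to state in code: the same greedy procedure solves both MinST and MaxST simply by negating the weights (this sketches only the classical finite case, not the infinite-graph results of the talk):

```python
import heapq

def prim(n, edges, maximize=False):
    """Prim's algorithm on a finite connected graph with vertices
    0..n-1 given as (u, v, w) edges; negating the weights turns the
    minimum spanning tree computation into a maximum one."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        key = -w if maximize else w
        adj[u].append((key, v))
        adj[v].append((key, u))
    seen, total = {0}, 0
    heap = list(adj[0])
    heapq.heapify(heap)
    while heap:
        key, v = heapq.heappop(heap)
        if v in seen:
            continue
        seen.add(v)
        total += -key if maximize else key
        for e in adj[v]:
            heapq.heappush(heap, e)
    return total

edges = [(0, 1, 1), (1, 2, 2), (0, 2, 3)]
print(prim(3, edges), prim(3, edges, maximize=True))  # 3 5
```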
Abstract: Whereas interior point methods provide polynomial-time linear programming algorithms, the running time bounds depend on bit-complexity or condition measures that can be unbounded in the problem dimension. This is in contrast with the simplex method, which always admits an exponential bound. We introduce a new polynomial-time path-following interior point method where the number of iterations also admits a singly exponential upper bound. This complements previous work by Allamigeon, Benchimol, Gaubert, and Joswig (SIAGA 2018) that exhibited a family of instances where any path-following method must take exponentially many iterations. For an n-variable linear program in standard form, the number of iterations of our algorithm is at most O(n^{1.5} log n) times the number of segments of any piecewise linear curve in the wide neighbourhood of the central path. In particular, it matches the number of iterations of any path-following interior point method up to this polynomial factor. The overall exponential upper bound derives from studying the ‘max central path’, a piecewise-linear curve with the number of pieces bounded by the total length of 2n shadow vertex simplex paths.
Abstract: The Weighted Tree Augmentation Problem (WTAP) is one of the most basic connectivity augmentation problems. It asks how to increase the edge-connectivity of a given graph from 1 to 2 in the cheapest possible way by adding some additional edges from a given set. There are many standard techniques that lead to a 2-approximation for WTAP, but despite much progress on special cases, the factor 2 remained unbeaten for several decades. In this talk we present two algorithms for WTAP that improve on the longstanding approximation ratio of 2. The first algorithm is a relative greedy algorithm, which starts with a simple, though weak, solution and iteratively replaces parts of this starting solution by stronger components. This algorithm achieves an approximation ratio of (1 + ln 2 + epsilon) < 1.7. Second, we present a local search algorithm that achieves an approximation ratio of 1.5 + epsilon (for any constant epsilon > 0). We then show how our local search technique can be applied to Steiner Tree, leading to an alternative way to obtain the currently best known approximation factor of ln 4 + epsilon. This is joint work with Rico Zenklusen.
Abstract: Domain specific languages (DSL) for mathematical optimization allow users to write problems in a natural algebraic format. However, what is considered natural can vary from user to user. For instance, JuMP’s interface makes a distinction between conic formulations and nonlinear programming formulations whose constraints are level-sets of nonlinear functions. Tradeoffs between such alternative modeling formats are further amplified when dual solutions are considered. In this talk we describe work related to these tradeoffs in JuMP and OR-Tools. In particular, we consider modeling using a wide range of non-symmetric cones (and their solution with Hypatia.jl) and defining precise dual contracts for two-sided constraints (and other non-conic convex constraints).
Abstract: Aggregation of heating, ventilation, and air conditioning (HVAC) loads can provide reserves to absorb volatile renewable energy, especially solar photo-voltaic (PV) generation. In this work, we decide HVAC control schedules under uncertain PV generation, using a distributionally robust chance-constrained (DRCC) building load control model under Wasserstein ambiguity sets. We derive strengthened mixed integer linear programming (MILP) reformulations for DRCC problems with right-hand side uncertainty. To achieve a tradeoff between operational risk and costs, we propose an adjustable chance-constrained variant, for which MILP and conic reformulations are derived. Using real-world data, we conduct computational studies to demonstrate the efficiency of the solution approaches and the effectiveness of the solutions.
Abstract: We study an extended trust region subproblem minimizing a nonconvex function over the hollow ball $r \le \|x\| \le R$ intersected with a full-dimensional second order cone (SOC) constraint of the form $\|x - c\| \le b^T x - a$. In particular, we present a class of valid cuts that improve existing semidefinite programming (SDP) relaxations and are $\epsilon$-separable in polynomial time. We connect our cuts to the literature on the optimal power flow (OPF) problem by demonstrating that previously derived cuts capturing a convex hull important for OPF are actually just special cases of our cuts.
Abstract: Diversity plays a crucial role in team formation, representation of minority groups and generally when allocating resources fairly. Given a community composed by individuals of different types, we study the problem of dividing this community such that the global diversity is preserved as much as possible in each subgroup. We consider the diversity metric introduced by Simpson in his influential work that, roughly speaking, corresponds to the inverse of the probability that two individuals are from the same type when taken at random, with replacement. We provide a novel perspective by reinterpreting this quantity in geometric terms. We characterize the instances in which the optimal partition exactly preserves the global diversity in each subgroup. When this is not possible, we provide an efficient polynomial-time algorithm that outputs an optimal partition for the problem with two types. Finally, we discuss further challenges and open questions for the problem that considers more than two types. This is joint work with Sebastian Perez-Salazar and Victor Verdugo.
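The diversity metric referenced above is simple to compute. The following sketch evaluates the inverse Simpson index, i.e. the reciprocal of the probability that two individuals drawn at random with replacement share a type (an illustration of the metric only, not of the partitioning algorithm in the talk):

```python
from collections import Counter

def simpson_diversity(community):
    """Inverse Simpson index: 1 / sum_t p_t^2, where p_t is the
    fraction of individuals of type t. Equals the number of types
    when the community is perfectly balanced."""
    n = len(community)
    counts = Counter(community)
    return 1.0 / sum((c / n) ** 2 for c in counts.values())

# A perfectly balanced two-type community has diversity exactly 2.
print(simpson_diversity(["a", "a", "b", "b"]))  # 2.0
```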
12:00-2:00: Best poster award + Closing
The workshop will be held on the College Avenue Campus of Rutgers University. The activities will take place in the Rutgers Academic Building located at 15 Seminary Place, New Brunswick, NJ. Presentations will be in room 4225, which is located in the East Wing of the building. The most important locations are indicated in this map.
A block of rooms has been arranged with the Hyatt Regency in New Brunswick. We expect to open the block in mid to late March.
New Brunswick has a wide variety of dining options to fit all budgets. Many are within an easy walk of the conference venue and the Hyatt hotel. We have compiled some information on lunch options, quick lunch options (see map), and options for dinner. Recall that we are having a conference dinner on Monday at The Frog & The Peach.
Additional material can be found at the DIMACS page.
In order to attend the MIP workshop, every participant must provide a proof of either full COVID vaccination or a negative COVID PCR test taken within 72 hours of arrival at the workshop. For the safety of everyone in attendance, DIMACS requests that event participants wear masks in the lecture room and other crowded indoor spaces.
More information about the current policy is provided here.