Research Topics

Below is a description of my research areas, each listed with its related publications (papers that span multiple subjects appear more than once).

Matching problems

In matching problems we want to pair agents so as to respect certain feasibility conditions and/or maximize objective functions. Matching problems lie at the intersection of operations research, computer science, and economics, and have historically played a fundamental role in each of those areas. They appear in applications such as job placement, school choice, and machine learning. My research in this area has mainly followed three strands.
Matching under preferences. Unlike ''classical'' matching problems, in matching markets arising, e.g., in school choice, agents have preferences over their possible partners, which typically makes the set of feasible matchings and/or the objective function more complex. Here, my research has mostly focused on questions such as: can we efficiently obtain certain matchings of provably good quality? If not, can we define new concepts in matching markets that lead to algorithmically tractable problems? Much of my work here deals with extensions of the classical concept of stability, introduced by Gale and Shapley in 1962. I have studied popular matchings (MP5, MP6, MP8), legal / von Neumann-Morgenstern assignments (MP7, MP14), matchings with blocking costs (MP9), problems where the preferences of agents are described by choice functions (MP11), and stable matchings in two-stage settings (MP16). Some of those algorithms rely on and/or extend more general mathematical techniques, such as posets associated with sets of matchings and Birkhoff's representation theorem (see MP11, MP16), or pivoting procedures like Scarf's algorithm (see MP15). I have also investigated real-world applications of the theory of stable matchings, and of matching markets more generally, to school choice (and drawbacks thereof), especially in the presence of bias (MP10) and with the goal of incentivizing the admission of disadvantaged students (MP12). If you are interested in learning more about these topics, the videos below from the bootcamp I gave at ICERM, Brown University, on Matching theory and school choice discuss several problems, results, and applications in the area.
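To make the notion of stability concrete, here is a minimal sketch of the Gale-Shapley deferred-acceptance algorithm from 1962, mentioned above. The instance and names are purely illustrative, not taken from any of the papers:

```python
# A minimal sketch of Gale-Shapley deferred acceptance; the tiny
# two-sided instance below is made up for illustration.

def gale_shapley(prop_prefs, recv_prefs):
    """Return a stable matching as a dict receiver -> proposer.

    prop_prefs: proposer -> list of receivers, most preferred first.
    recv_prefs: receiver -> list of proposers, most preferred first.
    """
    # Precompute each receiver's ranking of proposers for O(1) comparisons.
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in recv_prefs.items()}
    free = list(prop_prefs)                # proposers without a partner
    next_idx = {p: 0 for p in prop_prefs}  # next receiver each will propose to
    match = {}                             # receiver -> current proposer
    while free:
        p = free.pop()
        r = prop_prefs[p][next_idx[p]]
        next_idx[p] += 1
        if r not in match:                 # r is unmatched: accept
            match[r] = p
        elif rank[r][p] < rank[r][match[r]]:  # r prefers p: swap partners
            free.append(match[r])
            match[r] = p
        else:                              # r rejects p
            free.append(p)
    return match

prop = {"a": ["x", "y"], "b": ["x", "y"]}
recv = {"x": ["b", "a"], "y": ["a", "b"]}
print(gale_shapley(prop, recv))  # {'x': 'b', 'y': 'a'}
```

In the output no pair prefers each other to their assigned partners, which is exactly the stability condition of Gale and Shapley.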

Extensions of classical weighted matching problems. Edmonds' maximum (weighted) matching problem is one of the pillars of discrete optimization. It can equivalently be formulated as a maximum (weighted) independent set problem in an auxiliary graph (the ''line graph''). Researchers have investigated superclasses of line graphs for which the (weighted) stable set problem can still be solved efficiently. My work here has focused on the class of claw-free graphs. Stable sets in claw-free graphs have been studied since the 1970s, with applications to problems arising in wireless networks, scheduling, and machine learning. For the maximum weighted stable set problem in claw-free graphs, we gave (the) fast(est) combinatorial algorithms (MP2, MP3), as well as the first non-trivial geometric algorithm (MP4). See also MP1.
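The matching-to-stable-set equivalence above can be illustrated in a few lines. This is a toy sketch on a made-up instance (a 4-cycle), only meant to show that a matching in a graph G is exactly an independent set in its line graph L(G):

```python
# Toy illustration: matchings in G correspond to independent (stable)
# sets in the line graph L(G). Instance chosen only for illustration.

from itertools import combinations

def line_graph(edges):
    """Vertices of L(G) are the edges of G; two are adjacent in L(G)
    exactly when they share an endpoint in G."""
    return {(e, f) for e, f in combinations(edges, 2) if set(e) & set(f)}

def is_matching(edge_set):
    """A matching: no two chosen edges share an endpoint."""
    ends = [v for e in edge_set for v in e]
    return len(ends) == len(set(ends))

def is_independent(vertex_set, lg_edges):
    """An independent set in L(G): no two chosen vertices are adjacent."""
    return all((e, f) not in lg_edges and (f, e) not in lg_edges
               for e, f in combinations(vertex_set, 2))

G = [(1, 2), (2, 3), (3, 4), (4, 1)]   # a 4-cycle
LG = line_graph(G)
M = [(1, 2), (3, 4)]                   # a perfect matching of the 4-cycle
print(is_matching(M), is_independent(M, LG))  # True True
```

Claw-free graphs strictly generalize line graphs, which is why stable set algorithms for them extend Edmonds' matching machinery.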
Online matching problems. The problem of matching drivers to riders in modern ride-sharing platforms is often modelled as an online matching problem in a bipartite graph, where one side of the market is known in advance (typically, the drivers), while the other arrives in an online fashion. Because of the scale of real-world instances, in practice simple algorithms are often used to solve such matching problems. In MP13, we studied in which settings we can expect the simplest algorithm of all, i.e., greedy, to perform well, as a function of the distributions of the locations of drivers and riders. Other online matching problems I am currently investigating include fairness concerns for the two sides of the market.
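A minimal sketch of the greedy rule discussed above: each arriving rider is matched to the nearest still-free driver. The one-dimensional locations below are illustrative and not taken from MP13:

```python
# Greedy online bipartite matching sketch: driver locations are known
# offline, riders arrive one at a time; each rider takes the nearest
# free driver. The 1-D instance is made up for illustration.

def greedy_match(drivers, riders):
    """Return a list of (rider, driver_location) pairs."""
    free = dict(enumerate(drivers))     # index -> location of free drivers
    pairs = []
    for r in riders:                    # riders arrive online, in order
        if not free:
            break                       # no driver left: rider unmatched
        i = min(free, key=lambda j: abs(free[j] - r))  # nearest free driver
        pairs.append((r, free.pop(i)))
    return pairs

print(greedy_match([0.0, 0.5, 1.0], [0.6, 0.1, 0.9]))
# [(0.6, 0.5), (0.1, 0.0), (0.9, 1.0)]
```

Greedy is myopic: an early rider may take a driver that a later rider needed more, which is why its performance depends on the location distributions of the two sides.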
Acknowledgments: My current work on matching problems is supported by an NSF CAREER award and by a Meta Research award.

Exact and approximate formulations for integer programming problems

Integer Programming (IP) aims at maximizing a linear function over the set of integer points that satisfy a finite set of linear constraints. It is used to model many problems arising in production planning, logistics, and many other settings. In general, IPs are very hard to solve, hence the study of fast general-purpose or problem-specific algorithms for their solution is a crucial area of research. Geometric algorithms for solving integer optimization problems are highly dependent on the size of the system of constraints describing the feasible region: the larger the system, the larger the running time of the algorithms (in both theory and practice, assuming all other parameters stay the same). An important problem is therefore to find formulations that exactly or approximately describe the original IP using a small number of constraints. My research in the area has focused on two strands.
Extended formulations. It is often the case that a feasible region can be expressed more succinctly as the image under an affine map of a convex set living in a higher-dimensional space. Such a convex set is called an extension, and its description via a system of constraints an extended formulation, for the original feasible region. A typical question I investigate in this area is: what is the minimum number of constraints in a (linear) extended formulation for the feasible region P of a linear program? This is called the extension complexity of P. Papers EF4, EF6, EF9, EF10, EF11, EF12 and the survey EF1 deal with this question for feasible regions of problems arising in combinatorial optimization, including matching problems under preferences (aka matching markets). Papers EF2, EF8 deal with more general integer and quadratic problems.
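A standard textbook illustration of the phenomenon (not taken from the papers above) is the cross-polytope, whose facet description is exponentially large while an extension needs only linearly many constraints:

```latex
% The cross-polytope P = conv(\pm e_1,\dots,\pm e_n) = \{x : \|x\|_1 \le 1\}
% has 2^n facets, so any inequality description in the original variables
% requires 2^n constraints. Writing each coordinate as a difference of two
% nonnegative variables gives an extension with only 2n + 1 constraints:
\[
  P \;=\; \Bigl\{\, x \in \mathbb{R}^n \;:\; \exists\, y^+, y^- \in \mathbb{R}^n_{\ge 0},\;
  x = y^+ - y^-,\;\; \sum_{i=1}^n \bigl(y^+_i + y^-_i\bigr) \le 1 \,\Bigr\}.
\]
% Hence the extension complexity of P is O(n), exponentially smaller
% than its number of facets.
```

The questions studied in the papers above ask, for feasible regions of combinatorial problems, when such exponential savings are (or provably are not) possible.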
Somewhat surprisingly, questions on extension complexity can be reformulated, and sometimes solved, using techniques from seemingly far-away areas of mathematics and computer science, such as algebraic geometry and Boolean Fourier analysis. I am / have been investigating some of these connections. In EF3, EF7 we explore in particular the relationship between extension complexity and communication protocols, a classical topic in theoretical computer science. In EF5 we bound the minimum increase in the number of variables needed to obtain certain extended formulations with ''few'' constraints. Ongoing work also deals with the relationship between extended formulations and Sum of Squares proofs.
Acknowledgments: My current work in this area is supported by an award from the AFOSR. Previously, it was supported by an award from the ONR, and an award and then a gift from the Swiss National Science Foundation.

Cutting planes. In order to obtain an approximation to the value of the optimal solution to an IP, a classical approach relaxes the integrality constraints and solves the (much easier) resulting linear program. However, the solution produced by this technique may be very far from the optimal integer solution. We therefore want to improve the formulation of the problem by adding further inequalities called cutting planes (or cuts).
Which cutting planes allow us to obtain a (much) better approximation to the optimal integer solution? Which, moreover, lead to a formulation that can be easily solved? In CP1, CP2, CP4, we investigate properties of cutting planes that are valid for any IP and go under the name of Chvátal-Gomory and split cuts. In CP3, CP6, CP7 we investigate families of cutting planes that are valid for knapsack problems. In CP5 we investigate how machine learning techniques can be used to guide the selection of the best cutting planes.
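A tiny worked example of a Chvátal-Gomory cut, on a made-up instance (not from the papers above): scale a valid inequality so its coefficients become integral, then round down the right-hand side; every integer feasible point still satisfies the cut, but fractional LP points may not.

```python
# Chvátal-Gomory cut sketch: from a valid inequality a·x <= b and a
# nonnegative multiplier u with u*a integral, derive (u*a)·x <= floor(u*b),
# which is valid for all integer feasible points.

from fractions import Fraction
from math import floor

def cg_cut(coeffs, rhs, multiplier):
    u = Fraction(multiplier)
    new_coeffs = [u * Fraction(a) for a in coeffs]
    assert all(c.denominator == 1 for c in new_coeffs), "u*a must be integral"
    return [int(c) for c in new_coeffs], floor(u * Fraction(rhs))

# Toy instance: 2x1 + 2x2 <= 3, x >= 0 integer. The LP relaxation allows
# the fractional point (3/4, 3/4); the CG cut with u = 1/2 gives
# x1 + x2 <= 1, which cuts that point off while keeping every integer
# feasible point.
print(cg_cut([2, 2], 3, "1/2"))  # ([1, 1], 1)
```

Chvátal-Gomory cuts are valid for any IP, which is what makes their strength (how much they tighten the relaxation) a natural object of study.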

Knapsack and other problems in combinatorial optimization

Many problems in Discrete Optimization can be formulated in a combinatorial way as the problem of choosing a set of maximum profit from a finite family of feasible sets. In this area, I have investigated fast algorithms, hardness, and hardness of approximation for graph optimization problems such as maximum stable (independent) set (KP3), coloring (KP2), the Traveling Salesman Problem (KP9), total matching (KP10), and for invariant-preserving reductions (KP1). These problems model applications arising, e.g., in the allocation of scarce resources such as bandwidth, classes, etc.
I have moreover been very interested in knapsack problems. In the classical knapsack problem we want to select a set of objects so as to maximize a linear objective function, subject to the constraint that the total weight of the selected objects does not exceed a given capacity. (Extensions of) knapsack are widely used to model resource allocation and financial planning problems. Combinatorial algorithms for knapsack are very well studied. However, less is known about geometric approaches, both for knapsack and for its minimization counterpart, called minimum knapsack. Some questions on geometric approaches to these problems are studied in KP4, KP6, KP7.
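The combinatorial approach mentioned above is epitomized by the classical dynamic program, which runs in time O(n · capacity); the instance below is illustrative:

```python
# Classical dynamic program for 0/1 knapsack, O(n * capacity) time.
# The instance is a standard textbook example, chosen for illustration.

def knapsack(profits, weights, capacity):
    """Maximum total profit of a subset of items whose total weight
    does not exceed the capacity."""
    best = [0] * (capacity + 1)                 # best[c] = optimum with capacity c
    for p, w in zip(profits, weights):
        for c in range(capacity, w - 1, -1):    # backwards: each item used at most once
            best[c] = max(best[c], best[c - w] + p)
    return best[capacity]

print(knapsack(profits=[60, 100, 120], weights=[1, 2, 3], capacity=5))  # 220
```

The optimum here picks the items of weight 2 and 3, filling the capacity exactly. Note the pseudo-polynomial dependence on the capacity, one reason geometric approaches are of independent interest.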
In order to model real-world scenarios, several extensions of knapsack have been introduced. One such generalization assumes that the capacity of the knapsack increases over time (so as to model growth in the budget for investments, in the available resources, etc.). This class of extensions goes under the name of incremental knapsack problems. In KP5, KP8 we study algorithms for certain incremental knapsack problems, and their connections with problems in scheduling. For more problems in combinatorial optimization, such as matching and the stable set problems that generalize matching, see this section.

Distributional and data-dependent analysis of algorithms

Classical theory evaluates algorithms in the worst-case scenario. For instance, the quality of an algorithm for the Traveling Salesman problem is determined by the instance on which it performs worst. However, real-world data is rarely adversarial, or even fully random. It is therefore important to analyze and compare algorithms under more restrictive assumptions on the input data, in order to develop a theory of algorithms that better matches practice, and at the same time to inform practitioners about which algorithm they should use, depending on the assumptions they can make on the data.
In DD1, DD2 we study the effectiveness of simple local search and greedy algorithms for the Traveling Salesman problem and the minimum weight perfect matching problem, respectively, showing that under fairly general assumptions on the data these simple algorithms perform almost optimally. We also discuss the robustness of those assumptions, showing that when they are relaxed even slightly, the performance of the algorithms quickly degrades.
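As a concrete example of the kind of simple local search algorithm analyzed in this line of work, here is a sketch of 2-opt for the Traveling Salesman problem; the random Euclidean instance is illustrative and not tied to DD1:

```python
# 2-opt local search sketch for Euclidean TSP: repeatedly reverse a
# segment of the tour while doing so strictly shortens it. The random
# instance below is made up for illustration.

import random
from math import dist  # Euclidean distance, Python >= 3.8

def tour_length(points, tour):
    return sum(dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(points, tour):
    improved = True
    while improved:                       # stop at a local optimum
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]  # reverse segment
                if tour_length(points, cand) < tour_length(points, tour) - 1e-12:
                    tour, improved = cand, True
    return tour

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(8)]
t = two_opt(pts, list(range(8)))
print(tour_length(pts, t) <= tour_length(pts, list(range(8))))  # True
```

Worst-case instances for 2-opt are known to be bad, yet on distributional inputs of this flavor such local search heuristics can be shown to perform almost optimally, which is precisely the gap this research direction addresses.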
In DD3, we investigate two popular algorithms for assigning seats to disadvantaged students in school choice markets, and give a sufficient condition, which we term high competitiveness, under which one algorithm can be proved to outperform the other (from the point of view of disadvantaged students). We also give evidence that this assumption is satisfied in relevant markets.

Discrete Optimization ↔ Machine Learning

Typically, researchers in machine learning have looked to continuous optimization for techniques to solve their problems. However, in recent years, there has been a surge of research at the intersection of machine learning and discrete optimization. On the one hand, can discrete optimization techniques solve problems arising in machine learning? On the other, can machine learning be used to solve complex discrete optimization problems faster?
My research here has focused on showing how certain impossibility results in machine learning can be transferred to results of a similar flavour in optimization (ML2), as well as on showing how machine learning can be embedded in integer programming solvers so as to increase their performance (ML1).

Miscellanea on polytopes

I have further worked on other geometric problems, mostly on polytopes. These include: an investigation of an intriguing yet very common class of polytopes called 2-level polytopes, with connections to statistics and the celebrated log-rank conjecture in theoretical computer science, see MI2, MI3; algorithms for finding certain substructures in polytopes, with connections to submodular functions and largest subdeterminants of matrices, see MI1, MI4; pivoting algorithms, see MI5.