# Research Seminar

The Research Seminar is a weekly seminar with talks by group members, guests and students on recent results in combinatorial optimization, algorithm design and related fields. Everyone is very welcome. If you have suggestions for talks or wish to present your own interesting results, or if you would like to subscribe to the list for talk announcements, feel free to send an email to the organizer, Lukas Nölke, or stop by on the 3rd floor of MZH for a cup of coffee.

Due to the current situation, all talks take place virtually as video conferences. If you are interested in attending a particular talk, please contact Lukas Nölke.

## Upcoming Talks

Machine-learned predictors, although achieving very good results for inputs resembling training data, cannot possibly provide perfect predictions in all situations. Still, decision-making systems that are based on such predictors need not only to benefit from good predictions but also to achieve a decent performance when the predictions are inadequate. In this paper, we propose a prediction setup for arbitrary metrical task systems (MTS) (e.g., caching, k-server and convex body chasing) and online matching on the line. We utilize results from the theory of online algorithms to show how to make the setup robust. Specifically for caching, we present an algorithm whose performance, as a function of the prediction error, is exponentially better than what is achievable for general MTS. Finally, we present an empirical evaluation of our methods on real world datasets, which suggests practicality.

## Past Talks

Laura Vargas Koch (RWTH Aachen)

Abstract: Modeling traffic in road networks is a widely studied but challenging problem, especially under the assumption that drivers act selfishly. A common approach is the deterministic queuing model. The basic idea is to model traffic by a continuous flow that travels over time through a network, in which the arcs are endowed with transit times and capacities. Whenever the flow rate exceeds the capacity, the flow particles build up a queue. So far, it has not been possible to represent spillback and kinematic waves in this model. By introducing a storage capacity, arcs can become full and thus might block preceding arcs, i.e., spillback occurs. Moreover, we add a gap flow to represent kinematic waves. Visually, this means that on a full road cars can only drive when there is a gap in front, and these gaps travel backwards over the road. We carry over the main results of the original model to our generalization, i.e., we characterize dynamic equilibria by sequences of particular static flows, so-called spillback thin flows. This is joint work with Leon Sering.

Alexander Lindermayr (University of Bremen)

Abstract: In 2019, Garg et al. proposed a novel algorithm for non-clairvoyant precedence-constrained scheduling with weighted flow and completion time objectives. We will look at their round-robin-like algorithm, which is almost identical for both objectives, and particularly at their dual-fitting-based analysis.

What does it mean for a graph to almost be planar? Or to almost have bounded degree? On some simple graph classes, difficult algorithmic problems become tractable. Ideally, one would like to use (or adapt) existing algorithms for graphs that are "almost" in a tractable class.

In this talk, I will discuss the notion of elimination distance to a class C, introduced by Bulian and Dawar (2016). The goals of the talk are:

1) Define this notion, and discuss why it is relevant by presenting some existing results.

2) Show that we can compute the elimination distance of a given graph to the class of graphs of degree at most d (in some cases).

That is, answer the question: "Is this graph close to a graph of bounded degree?"

The second part is the result of a collaboration with Alexander Lindermayr and Sebastian Siebertz.

Consider a matching problem with two sets of participants, each of whom ranks the participants of the other set in order of their individual preferences. Using the deferred acceptance algorithm by Gale and Shapley (1962), both of the one-sided rank-maximal stable matchings can be obtained from the instance. Stability is an essential property in the setting of matchings under preferences and is often considered in the sense of fairness. Meanwhile, a matching is called rank-maximal if it maximizes the number of participants assigned to their first choice and, subject to that, their second choice, and so on.

However, in some applications two-sided rank-maximality is pursued, which the deferred acceptance algorithm cannot provide. In this talk, I present the algorithm introduced in the paper of Cooper and Manlove (2020), based on the work of Irving, Leather and Gusfield (1987), which starts with the extended Gale-Shapley algorithm and terminates in an egalitarian (i.e., two-sided) rank-maximal stable matching.
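The following is a minimal Python sketch of Gale-Shapley deferred acceptance, included only to make the mechanism concrete; the toy instance is hypothetical, and the sketch computes the proposer-optimal stable matching, not the egalitarian rank-maximal one from the talk.

```python
def deferred_acceptance(proposer_prefs, reviewer_prefs):
    """Each argument maps an id to a preference list over the other side."""
    # rank[r][p]: position of proposer p in reviewer r's list (lower is better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # next index each proposer tries
    match = {}                                    # reviewer -> proposer
    free = list(proposer_prefs)
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p                          # r tentatively accepts p
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])                 # r trades up; old partner is free
            match[r] = p
        else:
            free.append(p)                        # r rejects p
    return match

# Hypothetical toy instance.
proposers = {"a": ["X", "Y"], "b": ["X", "Y"]}
reviewers = {"X": ["b", "a"], "Y": ["a", "b"]}
print(deferred_acceptance(proposers, reviewers))  # {'X': 'b', 'Y': 'a'}
```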

Parikh automata on finite words (PA) were first introduced by Klaedtke and Rueß. This extension of finite automata processes sequences of symbols that are labeled with vectors of natural numbers. Such a word is accepted if there is an accepting run in the automaton and the sum of the vectors lies in a given semi-linear set. In the first part, we study PA where the size of the vectors is bounded and compare the respective classes of languages to the Chomsky hierarchy. In the second part, we introduce Parikh automata on infinite words. We present two variants and characterize the respective classes of languages similar to Büchi's theorem. We conclude by giving an overview of their closure properties and common decision problems.

We consider the following online problem of min-cost perfect matching with concave delays: the input is a set of requests arriving online. In the mono-chromatic setting, any two requests can be matched into a pair; in the bi-chromatic setting, each request has a polarity (either positive or negative) and only requests of different polarities can be matched into a pair. At each point in time, the algorithm must choose between matching a pair of requests or delaying them to be matched at a later time. Let D denote a concave delay function. For any request r, suppose it arrives at time a(r) and is matched with another request at time m(r). Then a cost of D(m(r) − a(r)) is charged due to delaying this request for a duration of m(r) − a(r). The target is to minimize the total delay cost produced by the algorithm, i.e., ∑_r D(m(r) − a(r)). In this talk, we will introduce O(1)-competitive online algorithms for both the mono-chromatic and bi-chromatic settings.

Joint work with Yossi Azar and Danny Vainstein, accepted for SODA 2021.
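As a toy illustration of the delay objective (not of the competitive algorithms themselves), the following sketch evaluates the total delay cost of a fixed pairing under the hypothetical concave delay function D(t) = √t; all names and numbers are made up.

```python
import math

D = math.sqrt                            # a concave delay function (hypothetical choice)

arrival = {"r1": 0.0, "r2": 3.0, "r3": 4.0, "r4": 9.0}
pairs = [("r1", "r2"), ("r3", "r4")]     # a pair is matched once its later request arrives

total = 0.0
for a, b in pairs:
    m = max(arrival[a], arrival[b])      # matching time m(r) for both requests
    total += D(m - arrival[a]) + D(m - arrival[b])
print(total)                             # sqrt(3) + sqrt(0) + sqrt(5) + sqrt(0)
```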

My current research mainly focuses on parallel-batch machines with pulse interruptions (PI). Pulse interruptions derive from availability constraints, which assume that there is an interval during which the machine is unavailable. We study unavailable intervals whose length tends to 0 and call this kind of unavailable interval a pulse interruption. In effect, the pulse interruptions divide the scheduling horizon into several time intervals, and we study how to schedule jobs as batches into these intervals. We currently consider two problems: online scheduling on a single parallel-batch machine with a single pulse interruption, and online batch scheduling on parallel machines with periodic pulse interruptions.

Online optimization captures relevant real-world problems, where the input is unknown in advance and arrives over time. Therefore, classical online algorithms make no assumptions on the future, which gives their analysis a pessimistic flavor. Yet in many real-world applications we can make statements about the future using modern machine-learning approaches to predict it. In order to still achieve performance guarantees in such a setting, Lykouris and Vassilvitskii recently introduced a framework which measures the efficiency of such algorithms by analyzing them with respect to the quality of the given prediction, while making no assumptions on how these predictions are generated.

We apply this framework to the k-server problem on the line, which is a well-studied and fundamental online problem. There exists a simple, deterministic, k-competitive online algorithm, which is optimal. We propose a family of learning-augmented algorithms for this problem and bound their competitive ratio with respect to the prediction's quality, such that they break the lower bound of k given good predictions. For some algorithms of the family we also prove a general performance guarantee, independently of the prediction.

For the special case of two servers, we show that this family is in a certain sense optimal among all learning-augmented algorithms for the problem.

We also state a modification of the algorithms for tree metrics and achieve almost the same results as on the line.

For the 2-taxi problem, which was recently introduced by Coester and Koutsoupias and generalizes the 2-server problem, we present a similar approach using machine-learned predictions, and again circumvent existing lower bounds.

Finally, we empirically evaluate the proposed algorithms and illustrate their applicability in practice.

This thesis shows how online job scheduling algorithms can be improved using predictions about the processing times of jobs. We will take a closer look at the existing approach of combining the algorithms Round-Robin and Shortest Job First to improve their competitive ratio and transfer the concept to the weighted case. We establish a bound for the competitive ratio and show that the proposed algorithm performs as expected in practice by analyzing its performance across different instances in simulations.

Andreas Wiese (Universidad de Chile)

Independent set is a fundamental problem in combinatorial optimization. While in general graphs the problem is essentially inapproximable, for many important graph classes there are approximation algorithms known in the offline setting. These graph classes include interval graphs and geometric intersection graphs, where vertices correspond to intervals/geometric objects and an edge indicates that the two corresponding objects intersect. We present the first dynamic approximation algorithms for independent set of intervals and geometric objects. They work in the fully dynamic model where in each update an interval/geometric object is inserted or deleted. Our algorithms are deterministic and have worst-case update times that are polylogarithmic for constant d and epsilon. We achieve the following approximation ratios:

- For independent set of intervals, we maintain (1+ϵ)-approximate solutions for the unweighted and the weighted case.

- For independent set of d-dimensional hypercubes we maintain (1+ϵ)2^d-approximate solutions in the unweighted case and O(2^d)-approximate solutions in the weighted case. Also, we show that for maintaining unweighted (1+ϵ)-approximate solutions one needs polynomial update time for d≥2 if the ETH holds.

- For weighted d-dimensional hyperrectangles we present a dynamic algorithm with approximation ratio (1+ϵ)log^{d−1} N, assuming that the coordinates of all input hyperrectangles are in [0,N]^d and each of their edges has length at least 1.

Joint work with Monika Henzinger and Stefan Neumann.

Giorgio Lucarelli (Grenoble INP)

A well-identified issue in algorithms and, in particular, in online computation is the weakness of the worst-case paradigm. Several more accurate models have been proposed in the direction of eliminating this weakness. Resource augmentation is one of these models and has been successfully used to provide theoretical evidence for several scheduling heuristics that perform well in practice. According to it, the algorithm is applied to a more powerful environment than that of the adversary. Several types of resource augmentation for scheduling problems have been proposed up to now, including speed augmentation, machine augmentation and, more recently, rejection. In this context, we propose algorithms for online scheduling problems for minimizing variations of the total weighted flow time objective, based on mathematical programming and a primal-dual scheme.

Franziska Eberle (University of Bremen)

We study a fundamental online job admission problem where jobs with processing times and deadlines arrive online over time at their release dates, and the task is to determine a preemptive single-server schedule which maximizes the number of jobs that complete on time. To circumvent known impossibility results, we make a standard slackness assumption by which the feasible time window for scheduling a job is at least (1+epsilon) times its processing time, for some epsilon>0. We consider a variant of the online scheduling problem where the provider has to satisfy certain commitment requirements. These requirements arise, for example, in modern cloud services, where customers do not want last-minute rejections of critical tasks and request an early-enough provider-side commitment to completing admitted jobs. Our main contribution is an optimal algorithm for online job admission with commitment. When a provider must commit upon starting a job, our bound is O(1/epsilon). This is best possible, as there is a lower bound of Omega(1/epsilon) for online admission even without commitment. If the commitment decisions must be made before a job's slack becomes less than a delta-fraction of its size, we prove a competitive ratio of O(epsilon/((epsilon - delta) delta)) for 0<delta<epsilon. This result interpolates between commitment upon starting a job and commitment upon arrival. For the latter commitment model, it is known that no (randomized) online algorithm admits any bounded competitive ratio.

Veerle Tan-Timmermans (RWTH Aachen University)

Given two matroids M_1 and M_2 on a common ground set, some integer k, and two cost functions on the ground set of resources, we consider the optimization problem of finding a basis X of M_1 and a basis Y of M_2 minimizing the aggregated cost of the resources in X and Y, subject to either a lower bound, upper bound or equality constraint on the size of the intersection of the two bases X and Y. The problem with lower bound constraint turns out to be a generalization of the Recoverable Robust Matroid problem under the interval uncertainty representation, for which the existence of a strongly polynomial-time algorithm was left as an open question [Hradovich, 2017]. We show that the two problems with lower and upper bound constraints on the size of the intersection can be reduced to weighted matroid intersection, and thus be solved with a strongly polynomial-time primal-dual algorithm. The question whether the problem with equality constraint can also be solved efficiently turned out to be a lot harder. As our main result, we present a strongly polynomial, primal-dual algorithm for the problem with equality constraint on the size of the intersection. Additionally, we discuss generalizations of the problems from matroids to polymatroids, and from two to three or more matroids.

Marc Goerigk (University of Siegen)

In this talk I focus on combinatorial optimization problems under uncertainty. In the most basic (robust) setting, there is a set of cost vectors given, and the question is to find one solution such that the largest of the possible objective functions is as small as possible. A variant of this setting is two-stage robustness: here, we only need to make a partial decision in the first stage, under known first-stage costs. Then the uncertain second-stage cost vector is revealed, and we can complete our partial solution to a full solution. While one-stage robust combinatorial problems have been studied quite thoroughly over the past two decades, progress on two-stage problems has been only recent. I give an accessible overview for those who are new to robust optimization, and also present some of these recent results to highlight the current state of research.

Sebastian Siebertz (University of Bremen)

The notion of nowhere denseness was introduced by Nešetřil and Ossona de Mendez and provides a robust concept of uniform sparseness of graph classes. Nowhere dense graph classes generalize many familiar classes of sparse graphs such as the class of planar graphs and classes that exclude a fixed minor or topological minor. They admit several seemingly unrelated natural characterisations that lead to strong algorithmic applications. In particular, the model-checking problem for first-order logic is fixed-parameter tractable over these classes. In this tutorial I will give an introduction to the theory of nowhere dense graph classes with a focus on different characterisations and algorithmic applications.

Lukas Nölke (University of Bremen)

In the online matching problem on the line, n requests appear online and have to be matched, irrevocably and upon arrival, to a given set of servers, all on the real line. The goal is to minimize the sum of distances from the requests to their respective servers. Despite all research efforts, it remains an intriguing open problem whether there exists a constant-competitive online algorithm. The best known online algorithm, due to Raghvendra (SOCG 2018), achieves a competitive factor of O(log n). There exists a matching lower bound of Ω(log n) (Antoniadis et al., LATIN 2018) that holds for a quite large class of online algorithms, including all known deterministic algorithms in the literature. The best known general lower bound is 9+ε (Fuchs et al., TCS 2005). In this work, we consider the increasingly popular online recourse model and show that a constant competitive factor is possible if we allow online decisions to be revoked to some extent. More precisely, we derive an O(1)-competitive algorithm for online minimum matching on the line that rematches each of the n requests at most O(log n) times. For special instances, where no more than one request appears between two servers, we obtain a substantially better result: a (1+ε)-competitive algorithm with O_ε(1) recourse actions per request. Incidentally, the construction of the aforementioned Ω(log n) lower bound uses such instances. This is joint work with Nicole Megow.
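For intuition about the objective, in the balanced offline case (equally many requests and servers) the sorted-order matching is optimal by a standard exchange argument; the sketch below computes it on a toy instance. The online problem from the talk, where requests are matched irrevocably upon arrival, is much harder.

```python
def offline_line_matching(requests, servers):
    """Minimum total distance when #requests == #servers, all on the real line."""
    assert len(requests) == len(servers)
    # Matching the i-th smallest request to the i-th smallest server is optimal.
    return sum(abs(r - s) for r, s in zip(sorted(requests), sorted(servers)))

print(offline_line_matching([1.0, 5.0, 6.0], [0.0, 4.0, 8.0]))  # 1 + 1 + 2 = 4
```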

Franziska Eberle (University of Bremen)

We consider a model for interval scheduling with a rechargeable resource. A set of interval jobs with start times and end times needs to be scheduled on a set of identical machines. The goal is to maximize the total weight of the scheduled jobs. In the beginning, each machine is given an identical amount of a rechargeable resource. During the processing of a job, a job-dependent amount of this resource is used. While no job is processed on a machine, its resource recharges at a linear rate. The resource of a machine is neither allowed to exceed the maximal capacity nor to become negative. This model generalizes scheduling with a (non-)renewable resource, as the linear charging rate allows for a trade-off between these two extremes. A typical application arises in the context of station-based car sharing: scheduling bookings on e-cars with a rechargeable battery. For single-machine scheduling, we give an optimal polynomial-time algorithm for the unweighted case and an FPTAS in the presence of weights. The multiple-machine case turns out to be considerably harder, as the unit-weight case is already NP-hard for a constant number of machines. Based on the optimal single-machine algorithm, we develop an e/(e-1)-approximation if the number of machines is part of the input.
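A minimal sketch of the single-machine feasibility constraint described above, assuming for simplicity that a job's resource consumption is charged when the job starts; the function and instance are hypothetical.

```python
def feasible(jobs, capacity, rate):
    """jobs: non-overlapping (start, end, consumption) triples, sorted by start."""
    level, last_end = capacity, 0.0
    for start, end, consumption in jobs:
        level = min(capacity, level + rate * (start - last_end))  # recharge while idle
        level -= consumption                                      # drain for this job
        if level < 0:
            return False                                          # resource went negative
        last_end = end
    return True

print(feasible([(0, 2, 5.0), (3, 5, 4.0)], capacity=8.0, rate=2.0))  # True
```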

Nicole Megow (University of Bremen)

Explorable uncertainty refers to settings where parts of the input data are initially unknown, but can be obtained at a certain cost using queries. An algorithm can make queries one by one until it has obtained sufficient information to solve a given problem. The challenge lies in balancing the cost for querying and the impact on the solution quality. In this talk, I give a short overview of recent work on explorable uncertainty for combinatorial optimization problems. In particular, I will speak about network design (minimum spanning tree) and scheduling.

Miriam Schlöter (ETH Zurich)

Lattice-free gradient polyhedra are optimality certificates for mixed integer convex optimization problems. We consider how to construct these polyhedra for problems with two integer variables. A classic result of Bell, Doignon, and Scarf implies that a lattice-free gradient polyhedron with at most four facets exists. We show how to build a sequence of (not necessarily lattice-free) gradient polyhedra, each of which has at most four facets, that converges in finitely many steps to a lattice-free one. This construction gives a gradient descent type algorithm for problems with two integer variables. We also show that approximate gradient information is enough to guarantee convergence for strongly convex functions.

Jens Schlöter (University of Bremen)

Time-critical tasks that need to be scheduled and executed are often modeled as directed acyclic graphs (DAGs) in order to describe precedence constraints and potential parallelization between their subtasks. However, when modeling real-world applications, the execution of a subtask might depend on the evaluation of a condition, as for example in if-then-else statements of computer programs. Thus, only a subset of the subtasks will actually be executed. In order to model such control flow instructions, conditional DAGs can be used, which extend the original DAG model by allowing so-called conditional nodes, where the semantics of a conditional node is that exactly one of its successors will be processed. Using conditional DAGs leads to additional challenges regarding schedulability tests and the calculation of worst-case execution times. Even if the scheduling algorithm that will be used to schedule the task is fixed, its execution time still depends on which successors of the conditional nodes will be chosen to be processed. In this talk, we consider the case where a conditional DAG task and the scheduling algorithm to be used (in the form of a priority order over all subtasks that potentially will be executed) are given, and the goal is to compute the worst-case execution time. To this end, we present complexity results as well as approximation algorithms for the described problem.

Victor Verdugo (London School of Economics)

In many situations, finding the optimal revenue pricing policy requires solving a hard optimisation problem. Posted price mechanisms are simple and efficiently implementable. In this talk I'll show the connection between this type of mechanism and optimal stopping rules for online selection problems, and how the guarantees from one problem to the other are preserved.

Ulrich Pferschy (University of Graz)

We consider the 0-1 Incremental Knapsack Problem (IKP) where the knapsack capacity grows over time. For each item one has to decide in which time period it should be packed, or whether it remains unpacked. Once an item is placed in the knapsack in a certain time period, it cannot be removed afterwards. The contribution of a packed item in each time period depends on its profit as well as on a time factor which reflects the importance of the period in the objective function. The problem calls for maximizing the weighted sum of the profits over the whole time horizon. In this work, we provide approximation results for IKP and its restricted variants. In some results, we rely on Linear Programming to derive approximation bounds. We also devise a Polynomial Time Approximation Scheme (PTAS) when the number of periods is considered as a constant. Under the mild and natural assumption that each item can be packed in the first time period, we discuss different approximation algorithms suited for any number of time periods and for the special case with two and three periods. Joint work with Federico Della Croce and Rosario Scatamacchia.

Jens Schlöter (University of Bremen)

Tasks that need to be scheduled and executed are often modeled as directed acyclic graphs (DAGs) in order to describe precedence constraints and potential parallelization between their subtasks. However, when modeling real-world tasks, the execution of a subtask might depend on the evaluation of a condition, as for example in if-then-else statements of computer programs, and thus only a subset of the subtasks will actually be executed. In order to model such control flow instructions, conditional DAGs can be used, which extend the original DAG model by allowing so-called conditional nodes, where the semantics of a conditional node is that exactly one of its successors will be processed. Using conditional DAGs leads to additional challenges regarding schedulability tests and the calculation of the worst-case execution time of a modeled task. Even if the scheduling algorithm that will be used to schedule the task is fixed, its execution time still depends on which successors of the conditional nodes will be chosen to be processed. In this talk, we consider the case where a conditional DAG task and the scheduling algorithm to be used (in the form of a priority order over all subtasks that potentially will be executed) are given, and the goal is to calculate the worst-case execution time. To this end, we present complexity results as well as approximation algorithms for the described problem.

Ruben Hoeksma (University of Bremen)

We consider a non-atomic network congestion game with incomplete information in which the incomplete information comes from randomness in the demand. We model this as a multi-commodity flow, where for each commodity it is uncertain whether it will actually appear in the network. Wang, Doan, and Chen (2014), by considering an equilibrium notion in which users evaluate their expected cost using full knowledge of the demand distribution, concluded that the price of anarchy of the game can be arbitrarily large. We instead model the problem as a Bayesian game, where users evaluate their cost conditioned on the fact that they themselves are present in the network. In contrast to the conclusion by Wang, Doan, and Chen (2014), we find that the known bounds on the price of anarchy for the deterministic demand game also apply to the Bayesian game, even if the probability of traveling for different commodities is arbitrarily correlated. Specifically, if all latency functions in the network are affine, the price of anarchy for the Bayesian game is 4/3. We also generalize these results to the class of smooth games. This talk is based on work done in collaboration with José Correa and Marc Schröder.

Łukasz Jeż (University of Wrocław)

In the online packet scheduling problem with deadlines (PacketScheduling, for short), the goal is to schedule transmissions of packets that arrive over time in a network switch and need to be sent across a link. Each packet p has a deadline d_p, representing its urgency, and a non-negative weight w_p, that represents its priority. Only one packet can be transmitted in any time slot, so, if the system is overloaded, some packets will inevitably miss their deadlines and be dropped. In this scenario, the natural objective is to compute a transmission schedule that maximizes the total weight of packets which are successfully transmitted. The problem is inherently online, with the scheduling decisions made without the knowledge of future packet arrivals. The central problem concerning PacketScheduling, that has been a subject of intensive study since 2001, is to determine the optimal competitive ratio of online algorithms, namely the worst-case ratio between the optimum total weight of a schedule (computed by an offline algorithm) and the weight of a schedule computed by a (deterministic) online algorithm. We solve this open problem by presenting a ϕ-competitive online algorithm for PacketScheduling (where ϕ≈1.618 is the golden ratio), matching the previously established lower bound.
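To make the model concrete, here is a simple greedy baseline that transmits the heaviest pending packet in each slot; it is not the ϕ-competitive algorithm from the talk, and the packet data is hypothetical.

```python
def greedy_throughput(packets, horizon):
    """packets: (arrival, deadline, weight) triples; one transmission per slot."""
    sent, gained = set(), 0.0
    for t in range(horizon):
        pending = [i for i, (a, d, w) in enumerate(packets)
                   if a <= t <= d and i not in sent]
        if pending:
            best = max(pending, key=lambda i: packets[i][2])  # heaviest pending packet
            sent.add(best)
            gained += packets[best][2]
    return gained

print(greedy_throughput([(0, 0, 2.0), (0, 1, 1.5), (1, 1, 1.0)], horizon=2))  # 3.5
```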

Bertrand Simon (University of Bremen)

Scientific applications are usually described as directed acyclic graphs, where nodes represent tasks and edges represent dependencies between tasks. For some applications, such as the multifrontal method of sparse matrix factorization, this graph is a tree: each task produces a single output data, used by a single task (its parent in the tree). We focus on the case when the data manipulated by tasks have a large size, which is especially the case in the multifrontal method. To process a task, both its inputs and its output must fit in the main memory. Moreover, output results of tasks have to be stored between their production and their use by the parent task. It may therefore happen, during an execution, that not all data fit together in memory. In particular, this is the case if the total available memory is smaller than the minimum memory required to process the whole tree. In such a case, some data have to be temporarily written to disk and read afterwards. These Input/Output (I/O) operations are very expensive; hence, the need to minimize them. We revisit this open problem in this paper. Specifically, our goal is to minimize the total volume of I/O while processing a given task tree. We first formalize and generalize known results, then prove that existing solutions can be arbitrarily worse than optimal. Finally, we propose a novel heuristic algorithm, based on the optimal tree traversal for memory minimization. We demonstrate good performance of this new heuristic through simulations on both synthetic trees and realistic trees built from actual sparse matrices.

Claus Gwiggner (Universität Hamburg)

Scheduling with uncertain release dates has received little attention in the literature, although transportation, production and supply chain problems often exhibit such properties. In this paper, a sequence that minimizes the expected total weighted completion time under uncertain release dates is determined. The approach is a two-stage model, where an a priori sequence is established in the first stage and a scheduling operation is performed in the second stage. The major difficulty is the exponential number of joint realizations of the release dates. This difficulty is resolved by a Markov model of the optimal solution of the second stage, which allows us to solve instances with millions of scenarios to optimality. Moreover, depending on the variance of the release dates, the optimal policy turns out to shift a single task to the beginning of a sequence that is built according to the Weighted Shortest Expected Processing Times rule.

Thomas Weise (Hefei University)

In the fields of heuristic optimization and machine learning, experimentation is the way to assess the performance of an algorithm setup and the hardness of problems. Good experimentation is complicated. Most algorithms in the domain are anytime algorithms, meaning they can improve their approximation quality over time. This means that one algorithm may initially perform better than another one but converge to worse solutions in the end. Instead of single final results, the whole runtime behavior of algorithms needs to be compared (and runtime may be measured in multiple ways). We do not just want to know which algorithm performs best and which problem is the hardest ― a researcher wants to know why. We introduce a process which can 1) automatically model the progress of algorithm setups on different problem instances based on data collected in experiments, 2) use these models to discover clusters of algorithm (or problem instance) behaviors, and 3) propose reasons why a certain algorithm setup (or problem instance) belongs to a certain algorithm (or problem instance) behavior cluster. These high-level conclusions are presented in the form of decision trees relating algorithm parameters (or instance features) to cluster ids. We emphasize the duality of analyzing algorithm setups and problem instances. Our process is implemented as open source software and tested in two case studies, on the Maximum Satisfiability Problem and the Traveling Salesman Problem. Besides its basic application to raw experimental data, yielding clusters and explanations of “quantitative” algorithm behavior, our process also allows for “qualitative” conclusions by feeding it with data which is normalized with problem features or algorithm parameters. It can also be applied recursively, e.g., to further investigate the behavior of the algorithms in the cluster with the best-performing setups on the problem instances belonging to the cluster of hardest instances. Both use cases are investigated in the case studies.

Leen Stougie (CWI Amsterdam)

In recent years, the vertex enumeration problem of polyhedra has seen a revival in the study of metabolic networks, which increased the demand for efficient vertex enumeration algorithms for high-dimensional polyhedra given by equalities and inequalities. The complexity of enumerating the vertices of polytopes (bounded polyhedra) is a famous open problem in discrete and computational geometry. In this lecture we do not solve this problem, but present a type of fixed-parameter total polynomial time result. We apply the concept of branch-decomposition to the vertex enumeration problem of polyhedra P = {x : Ax = b, x ≥ 0}. For that purpose, we introduce the concept of a k-module and show how it relates to the separators of the linear matroid generated by the columns of A. This translates structural properties of the matroidal branch-decomposition to the context of polyhedra. We then use this to present a total polynomial time algorithm for enumerating all vertices of polytopes P for which the branch-width of the linear matroid generated by the columns of A is bounded by a constant k. If time permits, we will show a truly fixed-parameter tractable result, with the running time being an exponential function of the parameters only, times a polynomial function of the input/output size, if on top of bounded branch-width we also assume that all square submatrices of A have bounded determinant. This is joint work with Arne Reimers.

Klaus Heeger (University of Bonn)

We consider the 2-vertex-connected spanning subgraph problem. This problem asks, given a 2-vertex-connected graph, to find a 2-vertex-connected spanning subgraph with a minimum number of edges. This classical network optimization problem is known to be APX-hard. We give a 10/7-approximation algorithm for this problem, improving on the 3/2-approximation by Garg, Vempala and Singla. The algorithm is similar to the 17/12-approximation algorithm for the related 2-edge-connected spanning subgraph problem, and works by carefully modifying an open ear-decomposition with a minimum number of even ears.

Franziska Eberle (University of Bremen)

We study a fundamental online job admission problem where jobs with processing times and deadlines arrive online over time at their release dates, and the task is to determine a preemptive single-server schedule which maximizes the number of completed jobs. To circumvent known impossibility results, we make a standard slackness assumption by which the feasible time window for scheduling a job is at least (1+ε) times its processing time, for some ε>0. We analyze the competitive ratio of algorithms depending on the slackness parameter ε. In this paper we quantify the impact that different provider commitment requirements have on the performance of online algorithms. Studying this impact is important, for example, in modern cloud services, where customers do not want last-minute rejections of critical tasks and request an early-enough provider-side commitment to completing admitted jobs. Our main contribution is one universal algorithmic framework for online job admission both with and without commitments. For scheduling without commitment, we prove that our algorithm with a competitive ratio of O(1/ε) is the best possible (deterministic) algorithm for this problem. For different commitment models, we give the first non-trivial performance bounds. In the setting where commitment decisions must be made before a job's slack becomes less than a δ-fraction of its size, we prove a competitive ratio of O(ε/((ε−δ)δ²)), for 0<δ<ε. When a provider must commit upon starting a job, our bound is O(1/ε²). A stronger commitment model, where a provider has to commit at job arrival, does not admit any bounded competitive ratio even for randomized algorithms. Finally, we observe that for scheduling with commitment the restriction to the "unweighted" throughput model is essential; if jobs have individual weights, we rule out any bounded competitive ratio (for deterministic algorithms).

Adrian Fessel (University of Bremen)

Network topology, as well as dynamic processes such as flow, synchronization of oscillators, or wave activity that take place on a given network topology, have been extensively studied. In self-organizing networks, topology and dynamic processes interact in a unique way: the network adapts its structure based on local activity, enabling the emergence of global dynamic properties on an optimized topology. Working with Physarum polycephalum as an experimentally accessible model for an adaptive transportation system, we seek to formalize topological development into a sequence of discrete events intended to serve as a basis for studying the interaction of flow and topology. We focus on the reorganization of P. polycephalum networks after fragmentation, a process exhibiting a prominent percolation transition. Unlike in most traditional percolation processes, system size in P. polycephalum is not fixed, which requires a model that allows for dynamical changes in size. The theoretical description follows a master equation with parameters obtained from statistical analysis of experimental data. Computational investigation of the model recreates the dynamics of the topological phase transition and enables characterization of finite size effects and critical exponents. Comparing the influences of system growth and fusion within the system reveals a significant shift of the transition when system growth dominates.

Sigrid Knust (Osnabrück University)

A synchronous flow shop is a variant of a non-preemptive permutation flow shop where transfers of jobs from one machine to the next take place at the same time. The processing is organized in synchronized cycles which means that in a cycle all current jobs start at the same time on the corresponding machines. Then all jobs are processed and have to wait until the last one is finished. Afterwards, all jobs are moved to the next machine simultaneously. As a consequence, the processing time of a cycle is determined by the maximum processing time of the operations contained in it. Furthermore, only permutation schedules are feasible, i.e., the jobs have to be processed in the same order on all machines. The goal is to find a permutation of the jobs such that the makespan is minimized. Motivated by a practical application, we investigate special cases where the processing times of the cycles are determined by a subset of so-called dominating machines. Furthermore, we consider problems with additional resources and setup times.
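The cycle structure translates directly into a makespan computation: the job in position k of the permutation occupies machine i during cycle k+i, each cycle lasts as long as its slowest operation, and the makespan is the sum of the cycle lengths. A small Python sketch with hypothetical processing times:

```python
def sync_flowshop_makespan(p, perm):
    """p[i][j]: processing time of job j on machine i; perm: job permutation."""
    m, n = len(p), len(perm)
    makespan = 0
    for cycle in range(n + m - 1):
        ops = [p[i][perm[cycle - i]]          # job perm[k] is on machine i in cycle k+i
               for i in range(m) if 0 <= cycle - i < n]
        makespan += max(ops)                  # the slowest operation sets the cycle length
    return makespan

p = [[2, 4, 1],   # machine 1
     [3, 1, 2]]   # machine 2
print(sync_flowshop_makespan(p, [0, 1, 2]))  # cycles 2 + 4 + 1 + 2 = 9
```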

Bertrand Simon (ENS de Lyon)

With the increasing complexity of modern computing platforms, many scientific applications now rely on runtime schedulers. Most of these frameworks model an application as a directed acyclic graph of tasks, where nodes represent tasks and edges represent dependences between tasks. In my thesis, I focused on three theoretical challenges concerning task graph scheduling: exploiting the inherent parallelism of tasks, efficiently using accelerators, and coping with a limited memory. In this talk, I will detail two projects carried out in this context. The first one targets a platform composed of two types of processors, such as CPUs and GPUs. The processing time of each task depends on the type of processor it is allocated to, this decision being one of the main scheduling choices. We study the makespan minimization problem in an online setting, where the graph is gradually discovered, which increases the difficulty of the decision. In a second project, we focus on reducing the memory consumption of an application in order to avoid expensive memory transfers. Specifically, we aim at preventing dynamic schedulers from running out of memory on a shared-memory platform. These schedulers generally start multiple tasks in parallel, which may lead to a memory shortage. We study how to modify the task graph in order to restrict this parallelism while maintaining performance.

Viktor Bindewald (TU Dortmund)

The Bipartite Matching Problem is a well-studied combinatorial optimization problem with numerous practical applications. In real life, optimization problems are modeled using data based on forecasts and estimates, which leads to uncertain problem parameters. One way to take this fact into account is provided by the framework of robust optimization. While most robust variants of the Bipartite Matching Problem considered in the literature address cost uncertainty, this talk deals with a new robust problem, where the structure of the underlying bipartite graph is uncertain. In our problem, an additional part of the input is a list of failure scenarios, each described by a subset of edges of the graph that may fail simultaneously. If a scenario emerges, the corresponding edges are deleted from the graph. In the robust problem, the goal is to find a minimum-cost set of edges containing a perfect matching in every scenario, i.e., a robust solution survives any possible incident. We will discuss the computational complexity and present asymptotically tight approximation algorithms. Additionally, we address the following related question: is it easier to harden an existing infrastructure than to design a new one from scratch? In our setting, this results in the question of how to robustify a given perfect matching by adding new edges to the ambient graph. For this problem, we will present hardness and algorithmic results.

Franziska Eberle (University of Bremen)

We address a fundamental stochastic scheduling problem where jobs with uncertain processing times must be scheduled non-preemptively on m identical parallel machines. We are given a set J of n jobs, where each job j∈J is modeled by a random variable P_j with known distribution. The actual realization of a processing time becomes known only by executing a job. The goal is to find a policy Π that decides at any point in time which jobs to schedule so as to minimize the expected total completion time, ∑_{j∈J} E[C_j^Π], where C_j^Π denotes the completion time of job j under policy Π. The scheduling problem can be stated in the standard three-field notation as P||E[∑_j C_j]. We consider so-called index policies that assign the same priority to jobs with the same probability distribution and schedule jobs one after the other on the first machine that becomes available. Job-based index policies do not consider the number of jobs or the number of machines. We rule out distribution-independent approximation factors for job-based index policies. Somewhat surprisingly, this lower bound is obtained already for very simple instances with only two types of jobs: identical deterministic jobs and a set of stochastic jobs that all follow the same Bernoulli distribution. For this class of instances we also give a policy that is an O(m)-approximation. This talk is based on work together with Felix Fischer, Nicole Megow, and Jannik Matuschke.
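Below is a Monte Carlo sketch of a job-based index policy on the instance class mentioned above (identical deterministic jobs plus identical Bernoulli jobs); the priority order, numbers, and sample counts are hypothetical choices.

```python
import heapq
import random

def expected_total_completion(samplers, m, runs=10000):
    """Schedule jobs in the given fixed order, always on the next free machine."""
    total = 0.0
    for _ in range(runs):
        machines = [0.0] * m                  # machine-available times (min-heap)
        for sample in samplers:
            start = heapq.heappop(machines)
            finish = start + sample()         # realization revealed by executing the job
            total += finish
            heapq.heappush(machines, finish)
    return total / runs

det = [lambda: 1.0] * 4                                      # deterministic jobs
bern = [lambda: 10.0 if random.random() < 0.1 else 0.0] * 4  # Bernoulli jobs
print(expected_total_completion(det + bern, m=2))            # deterministic jobs first
```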

Katrin Casel (Trier University)

For most clustering problems, the quality of a solution is usually assessed with respect to a given pairwise distance on the input objects. Approximate solutions for such tasks often rely on this distance being a metric. But what happens if this property does not hold? Especially for computing a clustering such that each cluster has a certain minimum cardinality k>2 (as required for anonymisation or balanced facility location problems), this becomes problematic. For example, with the objective to minimise the maximum radius or diameter of all clusters, there exists no polynomial-time constant-factor approximation if the triangle inequality is violated, while there exists a 2-approximation for metric distances. This talk introduces methods to resolve or at least soften this effect of non-metric distances by devising particular strategies to deal with violations of the triangle inequality (conflicts). With the use of parameterised algorithmics, it turns out that, if the number of such conflicts is not too large, constant-factor approximations can still be computed efficiently.

Ruben Hoeksma (University of Bremen)

TSP with neighborhoods is a long-studied problem where we are asked to find the shortest tour visiting a given set of regions (the neighborhoods). When the neighborhoods are lines in two dimensions, the problem is efficiently solvable. For lines in three dimensions, the problem is NP-hard, and this is generally true in higher dimensions with regions that are low-dimensional subspaces. The question we try to answer in this talk concerns the setting where the regions are hyperplanes in three or more dimensions. The complexity of this problem is open (no hardness results are known) and, until now, only a recent (2d)-approximation algorithm was known. In this talk, we make a step forward in answering this sixteen-year-old open question by designing a PTAS. At a high level, we will see the underlying ideas, including a (new) sparsification technique for polytopes and a linear program that computes the shortest tour.

This talk is based on work together with Antonios Antoniadis, Krzysztof Fleszar, and Kevin Schewior.

Syamantak Das (University of Bremen)

In the Survivable Network Design Problem (SNDP), we are given an undirected graph G together with costs on edges or vertices and a demand function on source-sink demand pairs (s_i, t_i, k_i), where s_i and t_i are vertices in the graph and k_i is a natural number. In the edge-connectivity setting (EC-SNDP), we are interested in the minimum-cost subgraph H of G in which every demand pair (s_i, t_i) is connected via k_i edge-disjoint paths. These problems are natural generalizations of Steiner tree and Steiner forest, which capture fault-tolerance requirements in networks.

In general, these problems are NP-hard, and even restrictive special cases of SNDP pose non-trivial challenges. As such, SNDP has drawn a lot of attention from researchers over the past decades. Special cases of SNDP, like Steiner Tree, Steiner Forest, 2-Edge-Connected Subgraph etc., have been studied extensively on many special classes of graphs, e.g. planar graphs, Euclidean metrics, and bounded-treewidth graphs.

In this talk, we shall discuss some recent results on exact algorithms for general SNDP parameterized by the treewidth of the graph. In particular, we shall demonstrate a fairly general dynamic programming framework based on a tree decomposition of the graph, which can be applied to solve a number of related network design problems.

Thomas Schneider (University of Bremen)

Modal logic and its syntactic variants are widely used in knowledge representation and verification. Applications in those areas require modal logics with a careful balance of the rivalling requirements "high expressivity" and "low computational complexity". The talk will give an overview of our contributions to "taming" expressive temporal, description, and hybrid logics, which consist in identifying fragments with good computational complexity and studying modularity. These contributions fall into three groups: (1) systematic studies of the complexity of sub-Boolean fragments of expressive modal and description logics, (2) theoretic and practical studies of module extraction and modularisation of ontologies, and (3) a study of the complexity of lightweight combinations of temporal and description logics.

The talk will be accessible to a general audience with a background in (theoretical) computer science.

Rolf Drechsler (University of Bremen)

New challenges need to be faced during modeling, design, implementation, verification and test of Cyber-Physical Systems (CPS). For the design of hardware and software in this context, new approaches need to be developed, taking into account also non-functional requirements. This talk gives an overview of challenges and recent developments with a special focus on verification and correctness.

Tobias Mömke (University of Bremen)

We study the problem of finding a tour of n points in which every edge is long. More precisely, we wish to find a tour that visits every point exactly once, maximizing the length of the shortest edge in the tour. The problem is known as Maximum Scatter TSP, and was introduced by Arkin et al. (SODA 1997), motivated by applications in manufacturing and medical imaging. Arkin et al. gave a 0.5-approximation for the metric version of the problem and showed that this is the best possible ratio achievable in polynomial time (assuming P≠NP). Arkin et al. raised the question of whether a better approximation ratio can be obtained in the Euclidean plane. We answer this question in the affirmative in a more general setting, by giving a (1−ε)-approximation algorithm for d-dimensional doubling metrics, with running time Õ(n³ + 2^{O(K log K)}), where K ≤ (13/ε)^d. As a corollary we obtain (i) an efficient polynomial-time approximation scheme (EPTAS) for all constant dimensions d, (ii) a polynomial-time approximation scheme (PTAS) for dimension d = (log log n)/c, for a sufficiently large constant c, and (iii) a PTAS for constant d and ε = Ω(1/log log n). Furthermore, we show the dependence on d in our approximation scheme to be essentially optimal, unless Satisfiability can be solved in subexponential time.
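To make the objective concrete, the brute-force sketch below evaluates every tour of a toy instance and keeps one maximizing the shortest edge; it is exponential and purely illustrative.

```python
from itertools import permutations

def scatter_value(dist, tour):
    """Length of the shortest edge of the tour (including the closing edge)."""
    n = len(tour)
    return min(dist(tour[k], tour[(k + 1) % n]) for k in range(n))

def max_scatter_tour(points):
    dist = lambda i, j: abs(points[i] - points[j])
    # Try all tours and keep one whose shortest edge is longest.
    return max(permutations(range(len(points))),
               key=lambda t: scatter_value(dist, t))

pts = [0.0, 1.0, 2.0, 3.0]                    # points on a line, for simplicity
best = max_scatter_tour(pts)
print(best, scatter_value(lambda i, j: abs(pts[i] - pts[j]), best))
```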

Ruben Hoeksma (University of Bremen)

In this talk, we discuss the following setting. A gambler is presented with a set of random variables that come to him in random order. When a variable arrives, the gambler sees its value and decides to take it or to go on to the next. If the gambler takes it, he gets an amount of money equal to the value of the variable and the game ends; otherwise he is presented the next random variable. The gambler's aim is to maximize the expected amount of money he receives. This setting is a classic example of an optimal stopping problem and is known as the Prophet Secretary Problem. For this problem we analyze two different threshold strategies: one non-adaptive strategy, where the thresholds are chosen before any of the variables arrive and before the order is known, and one adaptive strategy, where thresholds are chosen based on the variables that have been rejected before. For the non-adaptive strategy we show that the gambler can, in expectation, get at least 1−1/e times the expected maximum value among the random variables (i.e., the value a prophet would receive). We show that no non-adaptive strategy can do better, even for i.i.d. random variables. For the adaptive strategy, we analyze only the i.i.d. setting and show that this improves the gambler's expectation to 0.745 times the expected maximum. This result settles a conjecture by Kertz from 1986. Finally, we discuss the implications for posted price mechanisms, a commonly used mechanism for selling goods. We show that the results for threshold strategies imply similar bounds for comparisons of posted price mechanisms with optimal auctions.
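The sketch below simulates a single fixed threshold in the i.i.d. setting, a simplification of the non-adaptive strategies from the talk (which use a sequence of thresholds); distribution, threshold, and trial count are hypothetical.

```python
import random
import statistics

def threshold_ratio(n, threshold, trials=20000):
    """Gambler takes the first value >= threshold; compare to the prophet's maximum."""
    algo, prophet = [], []
    for _ in range(trials):
        xs = [random.random() for _ in range(n)]     # i.i.d. uniform values
        algo.append(next((x for x in xs if x >= threshold), 0.0))
        prophet.append(max(xs))
    return statistics.mean(algo) / statistics.mean(prophet)

print(threshold_ratio(n=10, threshold=0.8))          # gambler vs. prophet, empirically
```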

Dmitry Feichtner-Kozlov (University of Bremen)

Motivated by higher-dimensional coboundary expansion, we define a certain family of graphs called Cheeger graphs. Little is known at present about these graphs. We will give a short, self-contained introduction to the subject and formulate several open, purely graph-theoretic problems.

Antonios Antoniadis (Max-Planck-Institut für Informatik)

We investigate online convex optimization with switching costs (OCO; Lin et al., INFOCOM 2011), a natural online problem arising when rightsizing data centers: A server initially located at p_0 on the real line is presented with an online sequence of non-negative convex functions f_1, f_2, …, f_n: ℝ → ℝ+. In response to each function f_i, the server moves to a new position p_i on the real line, resulting in cost |p_i − p_{i−1}| + f_i(p_i). The total cost is the sum of the costs of all steps. One is interested in designing competitive algorithms.

In this talk, we solve the problem in the classical sense: We give a lower bound of 2 on the competitive ratio of any, possibly randomized, online algorithm, matching the competitive ratio of previously known deterministic online algorithms (Andrew et al., COLT 2013/arXiv 2015; Bansal et al., APPROX 2015). It had previously been conjectured that (2−ε)-competitive algorithms exist for some ε>0 (Bansal et al., APPROX 2015).

Joint work with Kevin Schewior, to appear in WAOA 2017.
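The cost model is easy to state in code: each step pays the movement plus the service cost at the new position. The greedy jump-to-the-minimizer rule below is only a baseline to make the model concrete, not one of the algorithms discussed; all numbers are hypothetical.

```python
def total_cost(p0, steps):
    """steps: (f_i, argmin of f_i) pairs; pay |p_i - p_{i-1}| + f_i(p_i) per step."""
    pos, cost = p0, 0.0
    for f, minimizer in steps:
        cost += abs(minimizer - pos) + f(minimizer)  # move greedily to the minimizer
        pos = minimizer
    return cost

steps = [(lambda x: (x - 3) ** 2, 3.0), (lambda x: abs(x), 0.0)]
print(total_cost(0.0, steps))  # 3 + 0 + 3 + 0 = 6
```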

Peter Kling (Universität Hamburg)

We consider n jobs that have to be scheduled on m processors. The processors share a common, continuously divisible resource. In each discrete time step, the scheduler can change the assignment of the resource to the processors. Jobs come with a processing volume and a resource requirement. When a processor currently has a share R of the resource and works on a job with processing volume p and resource requirement r, exactly min{R/r,1} units of the job's processing volume are processed. That is, doubling the resource also doubles the speed a job is processed with, but this speedup is capped. Migration and preemption are not allowed, and the goal is to minimize the makespan.

In this talk, I present a (1+O(1/m))-approximation algorithm if all jobs have unit work and a (2+O(1/m))-approximation algorithm for jobs with arbitrary workload. The algorithms employ a sliding window technique and are quite simple and intuitive. This problem generalizes bin packing with cardinality constraints and splittable items, for which our results improve the approximation ratio achieved by a fast and simple algorithm from 2−O(1/m) to 1+O(1/m).

This is joint work with Alexander Mäcker, Sören Riechers, and Alexander Skopalik.
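One time step of the model fits in a few lines: with share R and requirement r, exactly min{R/r, 1} units of processing volume are completed. A toy illustration of the capped speedup, with hypothetical numbers:

```python
def step(volume, share, requirement):
    """Advance a job by min(share / requirement, 1) units of processing volume."""
    progress = min(share / requirement, 1.0)   # doubling the share doubles speed, capped
    return max(0.0, volume - progress)

for share in [0.5, 1.0, 2.0]:                  # job with requirement r = 1.0
    print(share, step(2.0, share, 1.0))        # speeds 0.5, 1.0, and capped at 1.0
```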

Jan Hackfeld (TU Berlin)

We consider the problem of exploring an undirected and initially unknown graph by a set of cooperative agents. The vertices of the graph are unlabeled, and the edges incident to a vertex have locally distinct labels. In this setting, it is known that for one agent Θ(log n) bits of memory are necessary and sufficient to explore any graph with at most n vertices. We show that O(log log n) cooperative agents with only constant memory can explore any graph with at most n vertices in polynomial time. We match this result with a lower bound showing that the number of agents needed for exploring any graph with at most n vertices is already Ω(log log n) when we allow each agent to have at most O(log^{1−ε} n) bits of memory, for some ε > 0. In particular, this implies that Ω(log log n) agents with constant memory are necessary for exploration.

Markus Jablonka (University of Bremen)

Submodular functions are set functions that capture the natural "diminishing returns" property and can be seen as a discrete analogue of convex functions. The problem of maximizing a submodular function is as follows: given a set E of elements and a set function f: 2^E → ℚ (typically accessed through an oracle), we want to determine a subset S of E with maximum value f(S). This is an NP-hard problem for which several approximation algorithms are known. These algorithms are either randomized or adaptive, in the sense that they choose which function values to look up after having already seen some others, and in this way adapt to the function. In this talk we discuss a non-adaptive, or oblivious, deterministic approach. All sets whose function values are queried are chosen in advance, as a fixed list. The set on this list with the highest function value is then returned as an approximate solution. The focus of the talk will be on choosing such a list and determining the quality of this solution.
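A sketch of the oblivious idea on a toy coverage function (which is submodular): the entire query list is fixed up front, here as random sets of a few sizes (a hypothetical choice), so no query depends on earlier oracle answers.

```python
import random

def oblivious_max(elements, oracle, queries_per_size=20):
    """Query a list of sets fixed in advance; return the best set seen."""
    sizes = {max(1, len(elements) // 4), max(1, len(elements) // 2),
             max(1, 3 * len(elements) // 4)}
    candidates = [frozenset(random.sample(elements, s))
                  for s in sizes for _ in range(queries_per_size)]
    return max(candidates, key=oracle)         # a single pass over the fixed list

# Example oracle: a coverage function.
covers = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}, 4: {"a"}}
coverage = lambda S: len(set().union(*(covers[i] for i in S)))
print(oblivious_max(list(covers), coverage))
```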

Syamantak Das (University of Bremen)

This will be a talk covering my general research interests: approximation and online algorithms for scheduling and network design.

First, I will speak about online scheduling problems. We propose a new model on top of classical competitive analysis that we believe is useful in dealing with uncertainty in the input data. We show that one can design simple online algorithms with good guarantees if one is allowed to reject, that is, not process, a tiny fraction of the input. We propose this in light of the fact that such an algorithm is not possible for optimizing certain classical scheduling objectives like makespan minimization or minimizing the (weighted) maximum flow time.

Joint work with: Anamitra Roy Choudhury, Naveen Garg and Amit Kumar.

In the second part, I shall talk about network design problems. We give improved approximation guarantees for a classical network design problem called the Group Steiner Tree, when the input graph has a special structure. In particular, we prove a generic reduction of a problem instance on such graphs to a problem instance on trees that allows us to carry out the desired improvement. The reduction may be of independent interest.

Joint work with: Parinya Chalermsook, Bundit Laekhanukit and Daniel Vaz.

Nicole Megow (University of Bremen)

Minimizing resource usage is a key to achieving economic, environmental, or societal goals. We consider the fundamental problem of minimizing the number of machines that is necessary for feasibly scheduling preemptive jobs with release dates and hard deadlines. We consider the online variant of this problem in which every job becomes known to the online algorithm only at its release date. We present new competitive algorithms for this problem. We also discuss the power of job migration and show somewhat surprising strong lower bounds on the best possible performance guarantee.

Joint work with: Lin Chen and Kevin Schewior.

Marek Adamczyk (University of Bremen)

Consider the following variant of the set cover problem. We are given a universe U = {1,...,n} and a collection of subsets C = {S_1,...,S_m}, where S_i ⊆ U. For every element u∈U we need to find a set ϕ(u)∈C such that u∈ϕ(u). Once we construct and fix the mapping ϕ: U → C, a subset X⊆U of the universe is revealed, and we need to cover all elements of X with exactly ϕ(X) := ⋃_{u∈X} ϕ(u). The goal is to find a mapping such that the cover ϕ(X) is as cheap as possible. This is an example of a universal problem, where the solution has to be created before the actual instance to deal with is revealed. Such problems appear naturally in settings where we need to optimize under uncertainty and it may be too expensive to begin finding a good solution once the input starts being revealed. A rich body of work was devoted to investigating such problems under the regime of worst-case analysis, i.e., when we measure how good the solution is by looking at the worst-case ratio: universal solution for a given instance vs. optimum solution for the same instance. As the universal solution is significantly more constrained, it is typical that such a worst-case ratio is actually quite big. One way to give a viewpoint on the problem that is less vulnerable to such extreme worst cases is to assume that the instance, for which we will have to create a solution, will be drawn randomly from some probability distribution. In this case one wants to minimize the expected value of the ratio: universal solution vs. optimum solution. Here the bounds obtained are indeed smaller than in the worst-case ratio. But even in this case we still compare apples to oranges, as no universal solution is able to construct the optimum solution for every possible instance. What if we compared our approximate universal solution against an optimal universal solution that obeys the same rules as we do? In the talk we will see that under this viewpoint, but still in the stochastic variant, we can indeed obtain better bounds than in the expected-ratio model. For the Set Cover problem --- on which the main ideas and characteristics of the model will be presented --- we will see how one can obtain an H_n-approximation, which matches the approximation ratio from the classic deterministic setup. Moreover, this result holds for all possible probability distributions over U that have a polynomially large carrier, while all previous results pertained to a model in which elements were sampled independently. The result is based on rounding a proper configuration IP that captures the optimal universal solution, and using tools from submodular optimization.

Joint work with: Fabrizio Grandoni, Stefano Leonardi and Michał Włodarczyk.
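As a toy evaluation of a universal mapping ϕ: each element is assigned a covering set in advance, and a revealed request X pays for the union of its assignments ϕ(X). Instance, costs, and mapping below are hypothetical.

```python
cost = {"S1": 3.0, "S2": 2.0, "S3": 2.0}
sets = {"S1": {1, 2, 3}, "S2": {1, 4}, "S3": {3, 4}}
phi = {1: "S1", 2: "S1", 3: "S1", 4: "S2"}   # fixed before the request X is revealed

def universal_cost(X):
    chosen = {phi[u] for u in X}             # ϕ(X) = union of the assigned sets
    return sum(cost[s] for s in chosen)

print(universal_cost({1, 2}))  # only S1 is paid: 3.0
print(universal_cost({3, 4}))  # S1 + S2 = 5.0, while the offline optimum S3 costs 2.0
```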

Felix Fischer (University of Glasgow)

The area of mechanism design is concerned with the development of algorithms that produce good outcomes given inputs from self-interested individuals. Much of mechanism design theory is dominated by the Vickrey-Clarke-Groves (VCG) mechanism, which makes it optimal for each individual to provide its true input. In the real world the VCG mechanism is used with surprising rarity, and some reasons for this are known. We identify another property of the mechanism that may be problematic in practice, a relative lack of robustness to inaccuracies in the choice of its parameters.

Joint work with Paul Dütting and David C. Parkes.