in "Graphs and Combinatorial Optimization: from Theory to Applications - CTW2020 Proceedings", C. Gentile, G. Stecca, P. Ventura (Eds.), 335—347, AIRO-Springer series, 2021

**Abstract:** In [T9], the first exact MIP
formulation was provided that describes the convex hull of the solutions
satisfying all the standard operational constraints for the thermal units:
minimum up- and down-time, minimum and maximum power output, ramp (including
start-up and shut-down) limits, general history-dependent start-up costs, and
nonlinear convex power production costs. That formulation contains a polynomial,
but large, number of variables and constraints. We present two new formulations
with fewer variables defined on the shut-down period and computationally test
the trade-off between reduced size and possibly weaker bounds.

**Keywords:** Unit Commitment problem, Ramp Constraints,
MIP Formulations, Dynamic Programming, Convex Costs

The published paper is available at https://doi.org/10.1007/978-3-030-63072-0_26. A Technical Report version is available here.

in "Graphs and Combinatorial Optimization: from Theory to Applications - CTW2020 Proceedings", C. Gentile, G. Stecca, P. Ventura (Eds.), 277—291, AIRO-Springer series, 2021

**Abstract:** Mobile cellular networks play a pivotal role in emerging
Internet of Things (IoT) applications, such as vehicular collision alerts,
malfunctioning alerts in Industry-4.0 manufacturing plants, periodic
distribution of coordination information for swarming robots or platooning
vehicles, etc. All these applications are characterized by the need to route
messages within a given local area (geographic proximity) with constraints
about both timeliness and reliability (i.e., probability of reception). This
paper presents a Non-Convex Mixed-Integer Nonlinear Programming model for a
routing problem with probabilistic constraints on a wireless network. We
propose an exact approach consisting of a branch-and-bound framework based on a
novel Lagrangian decomposition to derive lower bounds. Preliminary experimental
results indicate that the proposed algorithm is competitive with
state-of-the-art general-purpose solvers, and can provide better solutions than
existing highly tailored ad-hoc heuristics for this problem.

**Keywords:** Internet of Things, Routing, Broadcast, Chance-Constrained
Optimization, Mixed-Integer Nonlinear Programs, Lagrangian relaxation, bundle
methods, Branch-and-Bound

The published paper is available at https://doi.org/10.1007/978-3-030-63072-0_22. A Technical Report version is available here.

*Computers and Operations Research* **129**, 105189, 2021

**Abstract:** We investigate a multiperiod drayage problem in which
customers request transportation services over several days, possibly leaving
the carrier some flexibility to change service periods. We compare three
approaches for the problem: a path-based model with all feasible routes, a
“Price-and-Branch” algorithm in which the pricing is formulated as a collection
of shortest path problems in a cunningly constructed acyclic network, and a
compact arc-flow formulation based on this network. The experiments show that
the latter formulation is the most efficient, and can solve to optimality
instances of real-world size (and beyond) in time compatible with typical
operational constraints. Also, the models allow us to assess that limited
amounts of flexibility from customers can significantly reduce routing costs
for the carrier while decreasing customers' costs as well.

**Keywords:** Drayage, Modeling, Reformulations

The published paper is available at https://doi.org/10.1016/j.cor.2020.105189, or via this ShareLink. A Technical Report version is available here.

*Discrete Applied Mathematics*, to appear, 2021

**Abstract:** Classical Lagrangian relaxations for the multicommodity
capacitated fixed-charge network design problem are the so-called flow and
knapsack relaxations, where the resulting Lagrangian subproblems decompose by
commodities and by arcs, respectively. We introduce node-based Lagrangian
relaxations, where the resulting Lagrangian subproblem decomposes by nodes. We
show that the Lagrangian dual bounds of these relaxations improve upon the
linear programming relaxation bound, known to be equal to the Lagrangian dual
bounds for the flow and knapsack relaxations. We also develop a Lagrangian
matheuristic to compute upper bounds. The computational results on a set of
benchmark instances show that the Lagrangian matheuristic is competitive with
the state-of-the-art heuristics from the literature.
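
For readers less familiar with these relaxations, a standard-notation sketch of the model and of the two classical relaxations (generic textbook notation, not taken from the paper) is:

```latex
% Multicommodity capacitated fixed-charge network design: commodities k,
% arcs (i,j), flow variables x, binary design variables y, fixed costs f,
% capacities u, node-arc incidence matrix N, demand vectors b^k.
\min \sum_{k}\sum_{(i,j)} c_{ij}^{k} x_{ij}^{k} + \sum_{(i,j)} f_{ij} y_{ij}
\quad \text{s.t.} \quad
N x^{k} = b^{k} \;\forall k, \qquad
\sum_{k} x_{ij}^{k} \le u_{ij}\, y_{ij} \;\forall (i,j), \qquad
x \ge 0,\; y \in \{0,1\}^{|A|}
% Flow relaxation: dualize the capacity constraints with multipliers
% \lambda \ge 0; the Lagrangian subproblem separates into one shortest-path
% problem per commodity k.
% Knapsack relaxation: dualize the flow-conservation constraints; the
% Lagrangian subproblem separates into one continuous knapsack problem
% per arc (i,j).
```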

**Keywords:** Network design, Lagrangian relaxation, column generation,
matheuristic

The published paper is available at https://doi.org/10.1016/j.dam.2020.12.024, or via this ShareLink. A Technical Report version is available here.

*International Journal of Electrical Power and Energy Systems*,
**127**, 106661, 2021

**Abstract:** The high penetration of intermittent renewable generation
has prompted the development of Stochastic Hydrothermal Unit Commitment (SHUC)
models, which are more difficult to solve than their thermal-based
counterparts due to hydro generation constraints and inflow uncertainties.
This work presents a SHUC model applied in centralized cost-based dispatch.
The SHUC is represented by a two-stage stochastic model, formulated as a
large-scale mixed-binary linear programming problem. The solution strategy is
divided into two steps. The first step is the Lagrangian Relaxation (LR)
approach, which is applied to solve the dual problem and generate a lower
bound for SHUC. The second step is given by a Primal Recovery where we use the
solution of the LR dual problem with heuristics based on Benders’ Decomposition
to obtain the primal-feasible solution. Both steps benefit from each other,
exchanging information over the iterative process. We assess our approach in
terms of the quality of the solutions and running times on space and scenario
LR decompositions. The computational instances use various power systems,
considering different configurations of plants (capacity and number of
units). The results show the advantage of our primal recovery technique
compared to solving the problem via MILP solver. This is true already for the
deterministic case, and the advantage grows as the problem’s size (number of
plants and/or scenarios) does. The space decomposition provides better
solutions, while the scenario decomposition provides better lower bounds. The
main idea is to encourage researchers to explore LR decompositions and
heuristics in other relevant problems.

**Keywords:** Stochastic Hydrothermal Unit Commitment, Lagrangian
Relaxation, Primal Recovery Technique, Mixed-binary linear program, Power
system planning.

The published paper is available at https://doi.org/10.1016/j.ijepes.2020.106661, or using this ShareLink. You can also download a Technical Report version.

*Management Science* **66**(7), 3051–3068, 2020

**Abstract:** The Cell Suppression Problem (CSP) is a challenging
Mixed-Integer Linear Problem arising in statistical tabular data protection.
Medium-sized instances of CSP involve thousands of binary variables and
millions of continuous variables and constraints. However, CSP has the
typical structure that allows application of the renowned Benders'
decomposition method: once the "complicating" binary variables are fixed,
the problem decomposes into a large set of linear subproblems on the "easy"
continuous ones. This makes it possible to project away the easy variables,
reducing to a master problem in the complicating ones where the value
functions of the subproblems are approximated with the standard cutting-plane
approach. Hence, Benders' decomposition suffers from the same drawbacks as
the cutting-plane method, i.e., oscillation and slow convergence, compounded
with the fact that the master problem is combinatorial. To overcome these
drawbacks we
present a stabilized Benders decomposition whose master is restricted to a
neighborhood of successful candidates by local branching constraints, which
are dynamically adjusted, and even dropped, during the iterations. Our
experiments with randomly generated and real-world CSP instances with up
to 3600 binary variables, 90M continuous variables and 15M inequality
constraints show that our approach is competitive with both the current
state-of-the-art (cutting-plane-based) code for cell suppression, and the
Benders implementation in CPLEX 12.7. In some instances, stabilized Benders
is able to quickly provide a very good solution in less than one minute,
while the other approaches were not able to find any feasible solution in
one hour.
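
As a rough illustration (generic notation, not necessarily the paper's), the local branching constraint restricting the master problem to a neighborhood of a successful binary candidate can be written as:

```latex
% Local branching constraint around a binary reference point \bar{y}:
% the Hamming distance from \bar{y} is bounded by a radius k, which is
% dynamically adjusted (and eventually dropped) during the iterations.
\sum_{j \,:\, \bar{y}_j = 0} y_j \;+\; \sum_{j \,:\, \bar{y}_j = 1} (1 - y_j) \;\le\; k
```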

**Keywords:** Benders' Decomposition, Mixed-Integer Linear Problems,
Stabilization, Local Branching, Large-scale Optimization, Statistical Tabular
Data Protection, Cell Suppression Problem

The final publication can be found at https://doi.org/10.1287/mnsc.2019.3341. Due to INFORMS provisions on Reuse Permissions, you can also download the Author Accepted Manuscript here.

in *Numerical Nonsmooth Optimization: State of the Art Algorithms*,
A.M. Bagirov, M. Gaudioso, N. Karmitsa, M. Mäkelä, S. Taheri
(Eds.), 61–116, Springer, 2020

**Abstract:** We review the basic ideas underlying the vast family of
algorithms for nonsmooth convex optimization known as "bundle methods". In a
nutshell, these approaches are based on constructing models of the function,
but the lack of continuity of first-order information implies that these models
cannot be trusted, not even close to an optimum. Therefore, many different
forms of stabilization have been proposed to try to avoid being led to areas
where the model is so inaccurate as to result in almost useless steps. In the
development of these methods, duality arguments are useful, if not outright
necessary, to better analyze the behaviour of the algorithms. Also, in many
relevant applications the function at hand is itself a dual one, so that
duality allows one to map algorithmic concepts and results back into a
"primal space" where they can be exploited; in turn, structure in that space can be
exploited to improve the algorithms' behaviour, e.g. by developing better
models. We present an updated picture of the many developments around the
basic idea along at least three different axes: form of the stabilization,
form of the model, and approximate evaluation of the function.
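
As a minimal illustration of the basic ingredients (standard textbook notation, not specific to the chapter), the cutting-plane model and the proximal stabilized master problem read:

```latex
% Cutting-plane model of a convex f, built from subgradients g_i \in \partial f(x_i)
% collected in the bundle B:
\hat{f}_B(x) \;=\; \max_{i \in B} \,\{\, f(x_i) + g_i^{\top}(x - x_i) \,\}
% Proximal bundle master problem: minimize the model plus a stabilizing
% quadratic term centered at the current stability center \bar{x}:
x_{+} \;\in\; \arg\min_x \;\hat{f}_B(x) + \tfrac{\mu}{2}\,\| x - \bar{x} \|^2
```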

**Keywords:** Nonsmooth optimization, bundle methods, stabilization,
decomposition, Lagrangian relaxation, duality, inexact
function evaluation, incremental approach, survey

The final publication can be found at https://doi.org/10.1007/978-3-030-34910-3_3. A Technical Report version can be downloaded here.

in *Analytics for the Sharing Economy: Mathematics, Engineering and
Business Perspectives*, E. Crisostomi, B. Ghaddar, F. Häusler,
J. Naoum-Sawaya, G. Russo, R. Shorten (Eds.), Springer, 2020

**Abstract:** Effectively sharing resources requires solving complex
decision problems. This entails constructing a mathematical model of the
underlying system, and then applying appropriate mathematical methods to find
an optimal solution of the model, which is ultimately translated into actual
decisions. The development of mathematical tools for solving optimization
problems dates back to Newton and Leibniz, but it has tremendously accelerated
since the advent of digital computers. Today, optimization is an
inter-disciplinary subject, lying at the interface between management science,
computer science, mathematics and engineering. This chapter offers an
introduction to the main theoretical and software tools that are nowadays
available to practitioners to solve the kind of optimization problems that are
more likely to be encountered in the context of this book. Using, as a case
study, a simplified version of the *bike sharing problem*, we guide the
reader through the discussion of modelling and algorithmic issues,
concentrating on methods for solving optimization problems to proven
optimality.

**Keywords:** Optimization Methods, Combinatorial Optimization,
Integer Programming, Bike Sharing

The final publication can be found at https://doi.org/10.1007/978-3-030-35032-1. A Technical Report version can be downloaded here.

*Lecture Notes on Computer Science – Proceedings of the
International Symposium on Combinatorial Optimization ISCO 2020*,
to appear, 2020

**Abstract:** Under mild assumptions that are satisfied for many network
design models, we show that the Lagrangian dual obtained by relaxing the flow
constraints is what we call "quasi-separable." This property implies that the
Dantzig-Wolfe (DW) reformulation of the Lagrangian dual exhibits two sets of
convex combination constraints, one in terms of the design variables and the
other in terms of the flow variables, the latter being linked to the design
variables. We compare the quasi-separable DW reformulation to the standard
disaggregated DW reformulation. We illustrate the concepts on a particular
case, the budget-constrained multicommodity capacitated unsplittable network
design problem.

**Keywords:** Lagrangian relaxation, Dantzig-Wolfe reformulations,
network design

*Mathematics of Operations Research* **45**(1), 15–33, 2020

**Abstract:** We study the problem of decomposing the Hessian matrix of
a Mixed-Integer Convex Quadratic Program into the sum of positive semidefinite
2x2 matrices. Solving this problem enables the use of Perspective Reformulation
techniques for obtaining strong lower bounds for MICQPs with semi-continuous
variables but a nonseparable objective function. An explicit formula is derived
for constructing 2x2 decompositions when the underlying matrix is Weakly Scaled
Diagonally Dominant, and necessary and sufficient conditions are given for the
decomposition to be unique. For matrices lying outside this class, two exact
SDP approaches and an efficient heuristic are developed for finding approximate
decompositions. We present preliminary results on the bound strength of a 2x2
Perspective Reformulation for the Portfolio Optimization Problem, showing that
for some classes of instances the use of 2x2 matrices can significantly improve
the quality of the bound w.r.t. the best previously known approach, although at
a possibly high computational cost.
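
In symbols, the decomposition sought is (generic notation, not necessarily the paper's):

```latex
% Decompose the Hessian Q \succeq 0 into PSD matrices supported on pairs {i,j}:
Q \;=\; \sum_{1 \le i < j \le n} Q^{\{i,j\}}, \qquad
Q^{\{i,j\}} \succeq 0, \qquad
Q^{\{i,j\}}_{hk} = 0 \;\;\text{unless}\;\; h, k \in \{i,j\}
% Each 2x2 term then involves at most two semi-continuous variables, so the
% Perspective Reformulation can be applied to it separately.
```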

**Keywords:** Mixed-Integer Quadratic Programming, Matrix Decomposition,
Scaled Diagonal Dominance, Semicontinuous variables,
Portfolio Optimization

The published paper is available at http://dx.doi.org/10.1287/moor.2018.0969, but thanks to INFORMS provisions on Rights and Permissions, you can download the author accepted manuscript (AAM).

in *Lecture Notes in Computer Science*,
6th International Conference on Machine Learning,
Optimization and Data Science - LOD 2020, G. Nicosia, P.M. Pardalos,
G. Giuffrida, R. Umeton and V. Sciacca (Eds.), Springer-Verlag, 2020

**Abstract:** We propose a methodology, based on machine learning and
optimization, for selecting a solver configuration for a given instance.
First, we employ a set of solved instances and configurations in order to
learn a performance function of the solver. Second, we formulate a
mixed-integer nonlinear program where the objective/constraints explicitly
encode the learnt information, and which we solve, upon the arrival of an
unknown instance, to find the best solver configuration for that instance,
based on the performance function. The main novelty of our approach lies in
the fact that the configuration set search problem is formulated as a
mathematical program, which allows us to a) enforce hard dependence and
compatibility constraints on the configurations, and b) solve it efficiently
with off-the-shelf optimization tools.

**Keywords:** Automatic Algorithm Configuration, Mathematical
Programming, Machine Learning, Optimization Solver Configuration,
Hydro Unit Commitment

Thanks to the provisions of Lecture Notes in Computer Science Consent to Publish, an author-created version can be downloaded here.

in *Lecture Notes in Computer Science* **12096**, Learning and
Intelligent Optimization - LION 2020, I.S. Kotsireas and P.M. Pardalos
(Eds.), 377–389, Springer, 2020

**Abstract:** We discuss the issue of finding a good mathematical
programming solver configuration for a particular instance of a given problem,
and we propose a two-phase approach to solve it. In the first phase we learn
the relationships between the instance, the configuration and the performance
of the configured solver on the given instance. A specific difficulty of
learning a good solver configuration is that parameter settings may not all be
independent; this requires enforcing (hard) constraints, something that many
widely used supervised learning methods cannot natively achieve. We tackle
this issue in the second phase of our approach, where we use the learnt
information to construct and solve an optimization problem having an explicit
representation of the dependency/consistency constraints on the configuration
parameter settings. We discuss computational results for two different
instantiations of this approach on a unit commitment problem arising in the
short-term planning of hydro valleys. We use logistic regression as the
supervised learning methodology and consider `CPLEX` as the solver of
interest.

**Keywords:** Mathematical Programming, Optimization Solver Configuration,
Hydro Unit Commitment

The published paper is available at https://doi.org/10.1007/978-3-030-53552-0_34. Thanks to the provisions of Lecture Notes in Computer Science Consent to Publish, an author-created version can be downloaded here.

*ZIB Report* **20-08**, 2020

**Abstract:** Project plan4res
involves the development of a modular framework for the modeling and analysis
of energy system strategies at the European level. It will include models
describing the investment and operation decisions for a wide variety of
technologies related to electricity and non-electricity energy sectors across
generation, consumption, transmission and distribution. The modularity of the
framework allows for detailed modelling of major areas of energy systems that
can help stakeholders from different backgrounds to focus on specific topics
related to the energy landscape in Europe and to receive relevant outputs and
insights tailored to their needs. The current paper presents a qualitative
description of key concepts and methods of the novel modular optimization
framework and provides insights into the corresponding energy landscape.

**Keywords:** Energy Systems Analysis and Optimization, Simulation and
Planning Under Uncertainty, Renewables Integration,
Sector Coupling, Climate Change Impact

*AIRO Springer Series* **Vol. 4**, Springer International
Publishing, ISBN 978-3-030-57441-3 / 978-3-030-57442-0, 2020

**Abstract:** This book presents a collection of energy production and
distribution problems identified by the members of the COST Action TD1207
"Mathematical Optimization in the Decision Support Systems for Efficient and
Robust Energy Networks". The aim of the COST Action was to coordinate the
efforts of the experts in different fields, from academia and industry, in
developing innovative tools for quantitative decision making, and apply them
to the efficient and robust design and management of energy networks. The work
covers three main goals: 1) to be a nimble yet comprehensive resource of several real-life business problems with a categorized set of pointers to many
relevant prescriptive problems for energy systems; 2) to offer a balanced mix
of scientific and industrial views; 3) to evolve over time in a flexible and
dynamic way giving, from time to time, a more scientific or industrial - or
even political in a broad sense - weighted perspective. It is addressed to
researchers and professionals working in the field.

Available on the Springer website. However, you can freely peruse the original Wiki of the COST Action TD1207 upon which the book is based.

*Technical Report*, **IASI R. 19-03**, 2019

**Abstract:** Recently, Guan, Pan, and Zhou presented a MIP model for the
thermal single-unit commitment problem, claiming that it provides an integer
feasible solution for any convex cost function. In this note we provide a
counterexample to this statement, and we produce evidence that the perspective
function is needed for this aim.

**Keywords:** Unit Commitment, Mixed Integer Nonlinear Programming,
Perspective Function, Exact formulation

Download here or from Optimization Online.

*Optimization Online* **7426**, 2019

**Abstract:** The Unit Commitment (UC) problem in electrical power
production requires to optimally operate a set of power generation units over
a short time horizon (one day to a week). Operational constraints of each unit
depend on its type (e.g., thermal, hydro, nuclear, ...), and can be rather
complex; typical ones concern minimum and maximum power output, minimum up-
and down-time, start-up and shut-down limits, ramp-up and ramp-down limits.
Also, the objective function is often nonlinear. Thus, even the Single-Unit
Commitment (1UC) problem, in which only one unit is present, has a rich
combinatorial structure. In this work we present the first MINLP formulation
that describes the convex hull of the feasible solutions of (1UC) comprising
all the above constraints, and Second-Order Cone representable power generation
costs. The new formulation has a polynomial number of both variables and
constraints, and it is based on the efficient Dynamic Programming algorithm
proposed in [A16], together with the
perspective reformulation technique proposed in
[A17]. We then analyze the effect of using
it to develop tight formulations for the more general (UC). Since the
formulation, despite being polynomial-size, is rather large, we also propose
two new formulations, based on partial aggregations of variables, with
different trade-offs between the quality of the obtained bound and the cost
of solving the corresponding continuous relaxation. Our results show that
navigating these trade-offs may lead to improved performances for the partial
enumeration approach used to solve the problem.

**Keywords:** Unit Commitment problem, Ramp Constraints,
MIP Formulations, Dynamic Programming, Convex Costs

Download from Optimization Online.

*Transportation Research Part B* **127**, 99–124, 2019

**Abstract:** Planning a public transportation system is a complex
process, which is usually broken down into several phases, performed in sequence.
Most often, the trips required to cover a service with the desired frequency
(headway) are decided early on, while the vehicles needed to cover these trips
are determined at a later stage. This potentially leads to requiring a larger
number of vehicles (and, therefore, drivers) than would be possible if the two
decisions were performed simultaneously. We propose a multicommodity-flow type
model for integrated timetabling and vehicle scheduling. Since the model is
large-scale and cannot be solved by off-the-shelf tools with the efficiency
required by planners, we propose a diving-type matheuristic approach for the
problem. We report on the efficiency and effectiveness of two variants of the
proposed approach, differing on how the continuous relaxation of the problem is
solved, to tackle real-world instances of the bus transport planning problem
originating from customers of *M.A.I.O.R.*, a leading company providing
services and advanced decision-support systems to public transport authorities
and operators. The results show that the approach can be used to aid even
experienced planners in either obtaining better solutions, or obtaining them
faster and with less effort, or both.

**Keywords:** Public Transport, Timetabling, Vehicle Scheduling,
Integrated Approach, Matheuristic

The final publication can be found at https://doi.org/10.1016/j.trb.2019.07.004. You can also use this Share Link, or download a Technical Report Version.

in *A View of Operations Research Applications in Italy, 2018*,
M. Dell'Amico, M. Gaudioso and G. Stecca (Eds.), 207–217,
AIRO Springer Series, 2019

**Abstract:** Planning of services and resources for public transport
systems is a complex process. In practice the planning is usually decomposed
into several stages, where service levels are decided first, while vehicles
and drivers are only optimized later on. We describe the new TTD.XT tool,
developed by M.A.I.O.R. S.r.l. in collaboration with the University of Pisa,
that improves the efficient utilization of expensive resources by
*simultaneously* optimizing timetabling and vehicle scheduling. We
quickly present the underlying mathematical model and (math-heuristic)
algorithm, and compare the solutions constructed by the tool for real-world
bus transport planning instances for three major Italian cities with those
produced by the standard sequential decision process; the former are
significantly better than the latter both in terms of objective function value
and when judged by experts of the field.

**Keywords:** Timetabling, Vehicle Scheduling, Integrated Approach,
Matheuristic

The final publication can be found at https://doi.org/10.1007/978-3-030-25842-9_16.

*Optimization Letters* **13**(4), 673–684, 2019

**Abstract:** The Sequential Convex MINLP (SC-MINLP) technique is a
solution method for nonconvex Mixed-Integer NonLinear Problems where the
nonconvexities are separable. It is based on solving a sequence of convex
MINLPs which trade a better and better relaxation of the nonconvex part of the
problem with the introduction of more and more piecewise-linear nonconvex
terms, and therefore binary variables. The convex MINLPs are obtained by
partitioning the domain of each separable nonconvex term into the intervals in
which it is convex and those in which it is concave. In the former, the term is
left in its original form, while in the latter it is piecewise-linearized.
Since each interval corresponds to a semi-continuous variable, we propose to
modify the convex terms using the Perspective Reformulation technique to
strengthen the bounds. We show by means of experimental results on different
classes of instances that doing so significantly decreases the solution time
of the convex MINLPs, which is the most time consuming part of the approach,
and has therefore the potential to improve the overall effectiveness of
SC-MINLP.
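
For context, the Perspective Reformulation replaces a convex term on a semi-continuous variable with its perspective; in generic form (not the paper's notation):

```latex
% Semi-continuous variable: x = 0 if z = 0, l \le x \le u if z = 1 (z binary).
% Replace the convex cost term f(x) by its perspective z f(x/z), the tightest
% convex function agreeing with f(x) on the branch z = 1 and with 0 on z = 0:
z\, f(x / z) \;(= 0 \text{ for } z = 0), \qquad
l\,z \le x \le u\,z, \qquad z \in \{0,1\}
```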

**Keywords:** Global Optimization, NonConvex Separable Functions,
Sequential Convex MINLP Technique, Perspective Reformulation

The final publication can be found at https://doi.org/10.1007/s11590-018-1360-9. Thanks to the provisions of Springer's Copyright Transfer Statement, you can download the author-created version of the accepted manuscript here. You can also use this Springer Nature Sharing link.

*SIAM Journal on Computing* **48**(5), 1603–1642, 2019

**Abstract:** Since the seminal work by Shannon, theoreticians focused on
designing compressors targeted at minimizing the output size without
sacrificing compression/decompression efficiency too much. On the other hand,
software
engineers deployed several heuristics to implement compressors aimed at trading
compressed space versus compression/decompression efficiency in order to match
their application needs. In this paper we fill this gap by introducing the
bicriteria data-compression problem that asks to determine the shortest
compressed file that can be decompressed in a given time bound. Then, inspired
by modern data-storage applications, we instantiate the problem onto the family
of LZ-based compressors (such as Snappy and LZ4) and solve it by combining in a
novel and efficient way optimization techniques, string-matching data
structures and shortest-path algorithms over properly (bi-)weighted graphs
derived from the data-compression problem at hand. An extensive set of
experiments complements our theoretical achievements by showing that the
proposed algorithmic solution is very competitive with respect to
state-of-the-art highly-engineered compressors.

**Keywords:** Approximation algorithm, Data compression, Lempel-Ziv 77,
Pareto optimization

The published paper is available at https://doi.org/10.1137/17M1121457. Due to the provisions of SIAM Consent to Publish, it can also be downloaded here.

*Mathematical Programming Computation* **11**(2), 237–265,
2019

**Abstract:** This paper describes a new instance library for Quadratic
Programming (QP), i.e., the family of continuous and (mixed)-integer
optimization problems where the objective function and/or the constraints are
quadratic. QP is a very diverse class of problems, comprising sub-classes
ranging from trivial to undecidable. This diversity is reflected in the variety
of QP solution methods, ranging from entirely combinatorial approaches to
completely continuous algorithms, including many methods for which both aspects
are fundamental. Selecting a set of instances of QP that is at the same time
not overwhelmingly onerous but sufficiently challenging for the different,
interested communities is therefore important. We propose a simple taxonomy
for QP instances leading to a systematic problem selection mechanism. We then
briefly survey the field of QP, giving an overview of theory, methods and
solvers. Finally, we describe how the library was put together, and detail its
final contents.

**Keywords:** Instance Library, Quadratic Programming

The final publication can be found at https://doi.org/10.1007/s12532-018-0147-4, and it can be viewed via this Springer Nature SharedIt Link. Thanks to Springer's provisions about self-archiving, you can also download the Author's accepted manuscript version here.

*Annals of Operations Research* **271**(1), 11–85, 2018

**Abstract:** The Unit Commitment problem in energy management aims at
finding the optimal production schedule of a set of generation units, while
meeting various system-wide constraints. It has always been a large-scale,
non-convex, difficult problem, especially in view of the fact that, due to
operational requirements, it has to be solved in an unreasonably small time
for its size. Recently, growing renewable energy shares have strongly
increased the level of uncertainty in the system, making the (ideal) Unit
Commitment model a large-scale, non-convex and *uncertain* (stochastic,
robust, chance-constrained) program. We provide a survey of the literature on
methods for the Uncertain Unit Commitment problem, in all its variants. We
start with a review of the main contributions on solution methods for the
deterministic versions of the problem, focussing on those based on
mathematical programming techniques that are more relevant for the uncertain
versions of the problem. We then present and categorize the approaches to the
latter, while providing entry points to the relevant literature on
optimization under uncertainty. This is an updated version of the paper
[A46]; this version has over 170 more
citations, most of which appeared in the last three years, showing how fast
the literature on uncertain Unit Commitment evolves and, therefore, how
strong the interest in this subject is.

**Keywords:** Unit Commitment, Uncertainty, Large-Scale Optimization,
Survey

The published paper is available at https://doi.org/10.1007/s10479-018-3003-z, and it can be viewed via this Springer Nature SharedIt Link. You can also download a Technical Report version.

*SIAM Journal on Optimization* **28**(1), 379–410, 2018

**Abstract:** We propose a family of proximal bundle methods for
minimizing sum-structured convex nondifferentiable functions which require two
slightly uncommon assumptions that are satisfied in many relevant applications:
Lipschitz continuity of the functions and oracles which also produce *upper
estimates* on the function values. In exchange, the methods: i) use *upper
models* of the functions that allow estimating function values at points
where the oracle has not been called; ii) provide the oracles with more
information about when the function computation can be interrupted, possibly
diminishing their cost; iii) allow to skip oracle calls entirely for some of the
component functions, not only at "null steps" but also at
"serious steps"; iv) provide explicit and reliable a-posteriori
estimates of the quality of the obtained solutions; v) work with all possible
combinations of different assumptions on how the oracles deal with not being
able to compute the function with arbitrary accuracy. We also discuss the
introduction of constraints (or, more generally, of easy components) and use
of (partly) aggregated models.

**Keywords:** Nonsmooth optimization, bundle methods, inexact function
evaluation, incremental approach

The published paper is available at https://doi.org/10.1137/16M1089897. Due to the provisions of SIAM Copyright Assignment Agreement, it can also be downloaded here.

*Electronic Notes in Discrete Mathematics* **69**, 45–52
(Proceedings of the
9^{th}
joint EURO/ALIO International Conference 2018 on Applied Combinatorial
Optimization, Bologna, June 25-27), 2018

**Abstract:** Delay-Constrained Routing (DCR) problems require routing a
new flow in a computer network subject to worst-case end-to-end delay
guarantees. The delay of a packet flow has three components, one of which is the
"queueing delay", that depends on the scheduling algorithm implemented by the
routers of the network. When flows are not independent of each other, i.e.,
admitting a new flow changes the delay of the existing ones, admission control
policies are necessary to ensure that existing flows do not become
latency-unfeasible. It has been recently shown that admission control runs
contrary to the usual objective function employed in these models, i.e.,
minimization of the reserved rates, significantly worsening network performance.
In this paper we investigate the phenomenon and propose a heuristic way to
overcome the problem.

**Keywords:** Routing problems, maximum delay constraints, scheduling
algorithms, admission control, Mixed-Integer NonLinear
Programming, Second-Order Cone Programs, Robustness

The final publication can be found at https://doi.org/10.1016/j.endm.2018.07.007. Thanks to Elsevier's provisions about Scholarly Sharing, you can download the Author's accepted manuscript version here. You can also use this Share Link.

*Optimization Letters* **12**(1), 43–53, 2018

**Abstract:** We present and computationally evaluate a variant of the
fast gradient method by Nesterov that is capable of exploiting information,
even if approximate, about the optimal value of the problem. This information
is available in some applications, among which the computation of bounds for
hard integer programs. We show that dynamically changing the smoothness
parameter of the algorithm using this information results in a better
convergence profile of the algorithm in practice.

**Keywords:** Fast Gradient Method, Lagrangian Relaxation

The final publication is available at Springer via http://dx.doi.org/10.1007/s11590-017-1168-z. Thanks to the provisions of Springer's Copyright Transfer Statement, you can download the Author's accepted manuscript version here. The paper is also available on SharedIt.
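The core idea, exploiting (possibly approximate) knowledge of the optimal value to drive the smoothness parameter, can be illustrated on a toy nonsmooth problem. The sketch below is our own minimal illustration, not the algorithm of the paper: it minimizes f(x) = |x| through its Huber smoothing, halving the smoothness parameter μ whenever the smoothed gap to the assumed-known optimum f* = 0 is dominated by the smoothing error μ/2.

```python
def huber(x, mu):
    # Smooth approximation of |x| with smoothness parameter mu:
    # quadratic near 0, linear (slope +-1) elsewhere; gradient is (1/mu)-Lipschitz.
    return x * x / (2 * mu) if abs(x) <= mu else abs(x) - mu / 2

def grad_huber(x, mu):
    return x / mu if abs(x) <= mu else (1.0 if x > 0 else -1.0)

def smoothed_gradient(x0, f_star=0.0, mu=1.0, tol=1e-8, max_iter=10000):
    # Plain gradient steps of length mu = 1/L on the smoothed function; the
    # (assumed known) optimal value f_star triggers a decrease of mu once the
    # smoothed gap is dominated by the smoothing error mu/2.
    x = x0
    for _ in range(max_iter):
        if abs(x) <= tol:
            break
        if huber(x, mu) - f_star <= mu / 2:
            mu /= 2  # tighten the approximation using optimal-value information
        x -= mu * grad_huber(x, mu)
    return x

print(abs(smoothed_gradient(5.0)) < 1e-6)  # -> True
```

With a fixed μ the method can only reach accuracy proportional to μ; shrinking μ adaptively, as above, restores convergence to the true nonsmooth optimum.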

*Electronic Notes in Discrete Mathematics* **69**, 237–244
(Proceedings of the
9^{th}
joint EURO/ALIO International Conference 2018 on Applied Combinatorial
Optimization, Bologna, June 25-27), 2018

**Abstract:** This paper investigates a drayage problem, which is
motivated by a carrier providing *door-to-door* freight transportation
services by trucks and containers. The trucks carry one or two containers to
ship container loads from a port to importers and from exporters to the same
port. The problem is modeled by a set covering formulation with integer
variables. We propose a Price-and-Branch algorithm for this problem, in which
the pricing problem is a pair of shortest path problems in a suitable graph.
The algorithm can determine near-optimal solutions in a short time.

**Keywords:** Price-and-Branch, Drayage, Vehicle Routing Problem,
Set Covering

The final publication can be found at https://doi.org/10.1016/j.endm.2018.07.031. Thanks to Elsevier's provisions about Scholarly Sharing, you can download the Author's accepted manuscript version here. You can also use this Share Link.

*Journal of Industrial Engineering International* **14**(4),
665–676, 2018

**Abstract:** This paper addresses a drayage problem, which is motivated
by the case study of a real carrier. Its trucks carry one or two containers
from a port to importers and from exporters to the port. Since up to four
customers can be served in each route, we propose a set covering formulation
for this problem where all possible routes are enumerated. This model can be
efficiently solved to optimality by a commercial solver, significantly
outperforming a previously proposed node-arc formulation. Moreover, we show
that a different distribution policy, which results in an enlarged set of
feasible routes, may bring significant savings to the carrier w.r.t. the one
currently employed.

**Keywords:** Drayage, Vehicle Routing Problem, Street-turns,
Set Covering

The final publication can be freely downloaded from http://dx.doi.org/10.1007/s40092-018-0256-8.

*Journal of Network and Computer Applications* **106**,
1–16, 2018

**Abstract:** Coordinated Scheduling (CS) is used to mitigate inter-cell
interference in present (4G) and future (5G) cellular networks. We show that
coordination of a cluster of nodes can be formulated as an optimization problem,
i.e., placing the Resource Blocks (RB) in each node’s subframe with the least
possible overlapping with neighboring nodes. We provide a clever formulation,
which allows optimal solutions to be computed in clusters of ten nodes, and
algorithms that compute good suboptimal solutions for clusters of tens of nodes,
fast enough for a network to respond to traffic changes in real time. This
allows us to assess the relationship between the scale at which CS is performed
and its benefits in terms of network energy efficiency and cell-edge user rate.
Our results, obtained using realistic power, radiation and
Signal-to-Interference-and-Noise-Ratio (SINR) models, show that optimal CS
allows a significant protection of cell-edge users. Moreover, this goes
hand-in-hand with a reduction in the number of allocated RBs, which in turn
allows an operator to reduce its energy consumption. Both benefits actually
increase with the size of the clusters. The evaluation is carried out in both
a 4G and a foreseen 5G setting, using different power models, system bandwidths
and SINR-to-data-rate mappings.

**Keywords:** Coordinated Scheduling, energy-efficiency, cellular
networks, inter-cell interference

The final publication is available at https://doi.org/10.1016/j.jnca.2018.01.007.

*Sixth International Workshop on Cloud Technologies and Energy Efficiency
in Mobile Communication Networks* (CLEEN 2018), Porto, June 3 2018

**Abstract:** Cellular network nodes should be dynamically switched
on/off based on the load requirements of the network, to save power and
minimize inter-cell interference. This should be done taking into account
global interference effects, which requires a centralized approach. In this
paper, we present an architecture, realized within the Flex5GWare EU project,
that manages a large-scale cellular network, switching on and off nodes based
on load requirements and context data. We describe the architectural framework
and the optimization model that is used to decide the activity state of the
nodes. We present simulation results showing that the framework adapts to the
minimum power level based on the cell loads.

**Keywords:** Energy-efficiency, Mobile networks, Optimization

*IEEE Transactions on Sustainable Energy* **9**(3),
1307–1317, 2018

**Abstract:** Solving very-large-scale optimization problems frequently
requires decomposing them into smaller subproblems, which are iteratively
solved to produce useful information. One such approach is the Lagrangian Relaxation
(LR), a general technique that leads to many different decomposition schemes.
The LR produces a lower bound of the objective function and useful information
for heuristics aimed at constructing feasible primal solutions. In this paper,
we compare the main LR strategies used so far for Stochastic Hydrothermal Unit
Commitment problems, where uncertainty mainly concerns water availability in
reservoirs and demand (weather conditions). The problem is customarily modeled
as a two-stage mixed-integer optimization problem. We compare different
decomposition strategies (unit and scenario schemes) in terms of quality of
produced lower bound and running time. The schemes are assessed with various
hydrothermal systems, considering different configurations of power plants, in
terms of capacity and number of units.

**Keywords:** Lagrangian Relaxation, Mixed-Integer Linear Programming,
Hydrothermal Stochastic Unit Commitment

The final publication is available at http://dx.doi.org/10.1109/TSTE.2017.2781908. Thanks to the provisions of IEEE Copyright Transfer Agreement, you can download the accepted manuscript version here.

*EURO Journal on Computational Optimization* **5**(1-2), 1–3,
2017

The final publication is available at Springer via https://doi.org/10.1007/s13675-017-0083-5. Thanks to the provisions of Springer's Copyright Transfer Statement, you can download the Author's accepted manuscript version here. The paper is also available on SharedIt.

*Operations Research Letters*, **45**, 519–524, 2017

**Abstract:** We propose an improvement of the Approximated Projected
Perspective Reformulation (AP^{2}R) of
[A48] for dealing with constraints linking
the binary variables. The new approach solves the Perspective Reformulation
(PR) once, and then uses the corresponding dual information to reformulate the
problem prior to applying AP^{2}R, thereby combining the
root bound quality of the PR with the reduced relaxation computing time of
AP^{2}R. Computational results for the
cardinality-constrained Mean-Variance portfolio optimization problem show that
the new approach is competitive with state-of-the-art ones.

**Keywords:** Mixed-Integer NonLinear Problems, Semi-continuous
Variables, Perspective Reformulation, Projection,
Lagrangian Relaxation, Portfolio Optimization

The final publication is available at http://dx.doi.org/10.1016/j.orl.2017.08.001. Thanks to Elsevier's provisions about Article Sharing, you can download the Author's accepted manuscript version here.

*Computers & Operations Research* **81**, 67–77, 2017

**Abstract:** As shown in [A45], the
problem of routing a flow subject to a worst-case end-to-end delay constraint
in a packet-based network can be formulated as a Mixed-Integer Second-Order
Cone Program, and solved with general-purpose tools in real time on realistic
instances. However, that result only holds for one particular class of packet
schedulers, *Strictly Rate-Proportional* ones, and implicitly considering
each link to be fully loaded, so that the *reserved rate* of a flow
coincides with its *guaranteed rate*. These assumptions make latency
expressions simpler, and enforce perfect isolation between flows, i.e.,
admitting a new flow cannot increase the delay of existing ones. Other
commonplace schedulers both yield more complex latency formulæ and do not
enforce flow isolation. Furthermore, the delay actually depends on the
*guaranteed* rate of the flow, which can be significantly larger than the
*reserved* rate if the network is unloaded. In this paper we extend the
result to other classes of schedulers and to a more accurate representation of
the latency, showing that, even when admission control needs to be factored in,
the problem is still efficiently solvable for realistic instances, provided
that the right modeling choices are made.

**Keywords:** Routing problems, maximum delay constraints,
scheduling algorithms, admission control,
Mixed-Integer NonLinear Programming,
Second-Order Cone Programs, Perspective Reformulation

The published paper is available at http://dx.doi.org/10.1016/j.cor.2016.12.009, but you can download a Technical Report version.

*Computer Communications* **103**, 104–115, 2017

**Abstract:** In a network where weighted fair-queueing schedulers are
used at each link, worst-case end-to-end delays can be inferred from per-link
rate reservations. Therefore, it is also possible to compute
resource-constrained paths that meet target delay constraints, *and*
optimize some key performance metrics (e.g., minimize the overall reserved
rate, maximize the remaining capacity at bottleneck links, etc.). Despite the
large amount of literature on QoS routing that has appeared since the mid '90s,
few papers so far have discussed solving such a path computation problem to
optimality in general settings. In this paper, we formulate and solve the
*optimal path computation and resource allocation problem* assuming
different types of weighted fair-queueing schedulers in the network. We show
that, depending on the scheduling algorithm, routing a new flow may or may not
affect the performance of existing flows; hence, explicit admission control
constraints may be required to ensure that existing flows still meet their
deadline afterwards. Yet, for the relevant schedulers proposed in the
literature the problem can be formulated as a Mixed-Integer Second-Order Cone
problem (MI-SOCP), and can be solved at optimality in split-second times even
in fairly large networks.

**Keywords:** QoS Routing, Worst-Case Delay, Weighted Fair-Queueing,
Admission Control, Optimization

The final publication is available at http://dx.doi.org/10.1016/j.comcom.2016.09.006, but a Technical Report version can be freely downloaded.

*Mathematical Programming Computation* **9**(4), 573–604,
2017

**Abstract:** Subgradient methods (SM) have long been the preferred way
to solve the large-scale Nondifferentiable Optimization problems arising from
the solution of Lagrangian Duals (LD) of Integer Programs (IP). Although other
methods can have better convergence rates in practice, SM have certain
advantages that may make them competitive under the right conditions.
Furthermore, SM have significantly progressed in recent years, and new versions
have been proposed with better theoretical and practical performances in some
applications. We computationally evaluate a large class of SM in order to
assess if these improvements carry over to the IP setting. For this we build a
unified scheme that covers many of the SM proposed in the literature,
including some often-overlooked features like projection and dynamic generation
of variables. We fine-tune the many algorithmic parameters of the resulting
large class of SM, and we test them on two different Lagrangian duals of the
Fixed-Charge Multicommodity Capacitated Network Design problem, in order to
assess the impact of the characteristics of the problem on the optimal
algorithmic choices. Our results show that, if extensive tuning is performed,
SM can be competitive with more sophisticated approaches when the tolerance
required for solution is not too tight, which is the case when solving LDs of
IPs.

**Keywords:** Subgradient methods, Nondifferentiable Optimization,
Computational analysis, Lagrangian relaxation,
Multicommodity Network Design

The final publication is available at Springer via http://dx.doi.org/10.1007/s12532-017-0120-7. Thanks to the provisions of Springer's Copyright Transfer Statement, you can download the Author's accepted manuscript version here. The paper is also available on SharedIt.
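One of the classical SM covered by such unified schemes is the subgradient method with the Polyak stepsize, which, like the target-based rules discussed in the paper, exploits knowledge of the optimal value. The following is a minimal sketch of our own on a toy nonsmooth function, not one of the tuned variants tested in the paper.

```python
def f(x):
    # A simple nonsmooth convex test function with known optimum f* = 0 at (0, 0).
    return abs(x[0]) + 2 * abs(x[1])

def subgrad(x):
    sgn = lambda t: (t > 0) - (t < 0)
    return [sgn(x[0]), 2 * sgn(x[1])]

def polyak_subgradient(x, f, subgrad, f_star=0.0, iters=200):
    # Classical subgradient iteration x <- x - t g with the Polyak stepsize
    # t = (f(x) - f*) / ||g||^2, which requires knowing the optimal value f*.
    for _ in range(iters):
        g = subgrad(x)
        norm2 = sum(gi * gi for gi in g)
        if norm2 == 0:
            break  # 0 is a subgradient: x is optimal
        t = (f(x) - f_star) / norm2
        x = [xi - t * gi for xi, gi in zip(x, g)]
    return x

x = polyak_subgradient([4.0, -3.0], f, subgrad)
print(f(x) < 1e-6)  # -> True
```

When f* is not known exactly, practical variants replace it by a dynamically adjusted target level, which is one of the algorithmic parameters whose tuning the paper investigates.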

*Fifth International Workshop on Cloud Technologies and Energy Efficiency
in Mobile Communication Networks* (CLEEN 2017), Turin, June 22 2017

**Abstract:** This paper describes the software architecture and the
implementation of a fully operational testbed that demonstrates the benefits of
flexible, dynamic resource allocation with virtualized LTE-A nodes. The testbed
embodies and specializes the general software architecture devised within the
Flex5Gware EU project, and focuses on two *intelligent programs*: the
first one is a Global Scheduler, that coordinates radio resource allocation
among interfering nodes; the second one is a Global Power Manager, which
switches on/off nodes based on their expected and measured load over a period
of minutes. The software framework is written using open-source software, and
includes fast, scalable optimization algorithms in both components. Moreover, it
supports virtualized BaseBand Units, implemented using OpenAirInterface, that
can run on physical and virtual machines. We present the results obtained via
on-field measurements, which demonstrate the feasibility and benefits of our
approach.

**Keywords:** Coordinated Scheduling, power saving, software framework,
Cloud-RAN, testbed, OpenAirInterface, Flex5Gware

The final publication is available at http://dx.doi.org/10.23919/CLEEN.2017.8045910. Thanks to the provisions of IEEE Copyright Transfer Agreement, you can download the accepted manuscript version here.

*IEEE International Conference on Communications – Workshop on
Flexible Networks* (IEEE ICC2017 - FlexNets 2017), Paris, May 21-25 2017

**Abstract:** Using Coordinated Scheduling (CS), eNodeBs in a cellular
network dynamically agree on which Resource Blocks (not) to use, so as to
reduce the interference, especially for cell-edge users. This paper describes a
software framework that allows dynamic CS to occur among a relatively large
number of nodes, as part of a more general framework of network management
devised within the Flex5Gware project. The benefits of dynamic CS, in terms of
spectrum efficiency and resource saving, are illustrated by means of simulation
and with live measurements on a prototype implementation using virtualized
eNodeBs.

**Keywords:** Coordinated Scheduling, CoMP, Virtual RAN

The final publication is available at http://dx.doi.org/10.1109/ICCW.2017.7962645. Thanks to the provisions of IEEE Copyright Transfer Agreement, you can download the accepted manuscript version here.

*Fifth International Workshop on Cloud Technologies and Energy Efficiency
in Mobile Communication Networks* (CLEEN 2017), Turin, June 22 2017

**Abstract:** Coordinated Scheduling (CS) is one of the main techniques
to control inter-cell interference in present (4G) and future (5G) cellular
networks. We show that coordination of a *cluster* of nodes can be
formulated as an optimization problem, i.e., placing the Resource Blocks in
each node's subframe with the least possible overlapping with neighboring
nodes. We provide a clever formulation, which allows optimal solutions to be
computed in clusters of ten nodes, and algorithms that compute good suboptimal
solutions for clusters of several tens of nodes, fast enough for a network to
respond to traffic changes in real time. This allows us to assess the
relationship between the scale at which CS is performed and its benefits in
terms of network energy efficiency and cell-edge user rate. Our results show
that optimal CS allows a significant protection of cell-edge users. Moreover,
this goes hand-in-hand with a significant reduction in the number of allocated
Resource Blocks, which in turn allows an operator to reduce its energy
consumption. Both benefits actually *increase* with the size of the
clusters.

**Keywords:** CoMP-CS, energy-efficiency, scheduling, mobile networks,
optimization, simulation

The final publication is available at http://dx.doi.org/10.23919/CLEEN.2017.8045909. Thanks to the provisions of IEEE Copyright Transfer Agreement, you can download the accepted manuscript version here.

*Computational Optimization and Applications* **65**(3),
637–669, 2016

**Abstract:** We explore modifications of the standard cutting-plane
approach for minimizing a convex nondifferentiable function, given by an
oracle, over a combinatorial set, which is the basis of the celebrated
(generalized) Benders' decomposition approach. Specifically, we combine
stabilization (in two ways: via a trust region in the
*L*_{1} norm, or via a level constraint) and
inexact function computation (solution of the subproblems). Managing both
features simultaneously requires a nontrivial convergence analysis; we provide
it under very weak assumptions on the handling of the two parameters (target
and accuracy) controlling the *informative on-demand inexact oracle*
corresponding to the subproblem, strengthening earlier known results. This
yields new versions of Benders' decomposition, whose numerical performance is
assessed on a class of hybrid robust and chance-constrained problems that
involve a random variable with an underlying discrete distribution, are convex
in the decision variable, but have neither separable nor linear probabilistic
constraints. The numerical results show that the approach has potential,
especially for instances that are difficult to solve with standard techniques.

**Keywords:** Benders' decomposition, Chance-Constrained Problems,
Mixed-Integer Optimization, Nonsmooth Optimization,
Stabilization, Inexact Function Computation

The final publication is available at Springer via http://dx.doi.org/10.1007/s10589-016-9851-z. Thanks to the provisions of Springer's Copyright Transfer Statement, you can download the Author's accepted manuscript version here.
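The unstabilized cutting-plane scheme that Benders' decomposition builds on can be sketched in one dimension. The toy implementation below is an illustration of ours only, with an exact oracle and none of the trust-region/level stabilization or inexactness handling studied in the paper: each oracle call adds a cut, and the next iterate minimizes the piecewise-linear model of all cuts.

```python
def kelley(oracle, a, b, tol=1e-6, max_iter=100):
    # Kelley's cutting-plane method on [a, b]: the oracle returns (f(x), g(x))
    # with g a subgradient of the convex function f. Each cut
    # f(xi) + gi * (x - xi) under-estimates f, and the next iterate minimizes
    # the piecewise-linear model built from all cuts collected so far.
    cuts = []                       # (fi, gi, xi) triples
    x, x_best, f_best = (a + b) / 2, None, float("inf")
    for _ in range(max_iter):
        fx, gx = oracle(x)
        if fx < f_best:
            x_best, f_best = x, fx
        cuts.append((fx, gx, x))
        model = lambda z: max(fi + gi * (z - xi) for fi, gi, xi in cuts)
        # In 1D the model minimizer is an endpoint or a crossing of two cuts.
        cand = [a, b]
        for i, (fi, gi, xi) in enumerate(cuts):
            for fj, gj, xj in cuts[:i]:
                if gi != gj:
                    z = ((fj - gj * xj) - (fi - gi * xi)) / (gi - gj)
                    if a <= z <= b:
                        cand.append(z)
        x = min(cand, key=model)
        if f_best - model(x) <= tol:  # upper bound minus lower bound
            break
    return x_best, f_best

# Toy oracle: f(x) = (x - 1)^2 with subgradient 2(x - 1); minimum 0 at x = 1.
xb, fb = kelley(lambda x: ((x - 1) ** 2, 2 * (x - 1)), -5.0, 5.0)
print(abs(xb - 1) < 1e-2 and fb < 1e-4)
```

The stabilization studied in the paper restrains how far each new iterate may move from the current best point, precisely because this bare scheme can oscillate wildly in higher dimensions.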

*Computational Optimization and Applications* **63**(3),
705–735, 2016

**Abstract:** The Perspective Reformulation (PR) of a Mixed-Integer
NonLinear Program with semi-continuous variables is obtained by replacing each
term in the (separable) objective function with its convex envelope. Solving
the corresponding continuous relaxation requires appropriate techniques. Under
some rather restrictive assumptions, the *Projected PR*
(P^{2}R) can be defined where the integer variables are
eliminated by projecting the solution set onto the space of the continuous
variables only. This approach produces a simple piecewise-convex problem with
the same structure as the original one; however, this prevents the use of
general-purpose solvers, in that some variables are then only implicitly
represented in the formulation. We show how to construct an *Approximated*
Projected PR (AP^{2}R) whereby the projected
formulation is "lifted" back to the original variable space, with each integer
variable expressing one piece of the obtained piecewise-convex function. In
some cases, this produces a reformulation of the original problem with exactly
the same size and structure as the standard continuous relaxation, but
providing substantially improved bounds. In the process we also substantially
extend the approach beyond the original P^{2}R
development by relaxing the requirement that the objective function be
quadratic and the left endpoint of the domain of the variables be non-negative.
While the AP^{2}R bound can be weaker than that of the
PR, this approach can be applied in many more cases and allows direct use of
off-the-shelf MINLP software; this is shown to be competitive with previously
proposed approaches in some applications.

**Keywords:** Mixed-Integer NonLinear Problems, Semicontinuous Variables,
Perspective Reformulation, Projection

The final publication is available at Springer via http://dx.doi.org/10.1007/s10589-015-9787-8. Thanks to the provisions of Springer's Copyright Transfer Statement, you can download the Author's accepted manuscript version here.
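In the simplest setting underlying the P^{2}R/AP^{2}R development, a single semi-continuous variable p (either p = 0 or l ≤ p ≤ u, with 0 ≤ l, as encoded by a binary y) has a quadratic cost plus a fixed cost c y; the standard PR that the projection starts from can be sketched as follows (textbook form, our notation):

```latex
% Standard continuous relaxation of the semi-continuous term:
\min\; a p^{2} + b p + c y
\quad \text{s.t.} \quad l\,y \le p \le u\,y,\quad y \in [0,1]
% Perspective Reformulation: the quadratic term is replaced by its
% convex envelope over \{(p,y) : y \in \{0,1\},\ l\,y \le p \le u\,y\}:
\min\; \frac{a p^{2}}{y} + b p + c y
\quad \text{s.t.} \quad l\,y \le p \le u\,y,\quad y \in (0,1]
```

Both formulations have the same integer optima, but since a p²/y ≥ a p² for y ≤ 1, the perspective term yields a tighter continuous relaxation whenever 0 < y < 1 and p > 0.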

in *Lecture Notes of the Institute for Computer Sciences, Social
Informatics and Telecommunications Engineering* **172**,
Cognitive Radio Oriented Wireless Networks, D. Noguet, K. Moessner
and J. Palicot (Eds.), 754–766, Springer-Verlag, 2016

**Abstract:** Research on 4G/5G cellular networks is progressively
shifting to paradigms that involve virtualization and cloud computing. Within
this context, prototyping assumes a growing importance as a performance
evaluation method, besides large-scale simulations, as it allows one to
evaluate the computational requirements of the system. Both approaches share
the need for a structured and statistically sound experiment management, with
the goal of reducing errors in both planning and measurement collection. In
this paper, we describe how we solve the problem with OpenAirInterface (OAI),
an open-source system for prototyping 4G/5G cellular networks. We show how to
integrate a sound, validated software, namely ns2-measure, with OAI, so as to
enable harvesting samples of arbitrary metrics in a structured way, and we
describe scripts that allow structured experiment management, such as
launching a parametric simulation campaign and harvesting its results in a
plot-ready format. We complete the paper by demonstrating some advantages
brought about by our modifications.

**Keywords:** LTE-A, Cloud-RAN, OpenAirInterface, performance
evaluation, experimentation, ns2-measure

The publication is available at Springer via http://dx.doi.org/10.1007/978-3-319-40352-6_62.

*Applied Mathematics and Computation* **270**, 193–215,
2015

**Abstract:** We consider multigrid type techniques for the numerical
solution of large linear systems whose coefficient matrices show the structure
of (weighted) graph Laplacian operators. We combine ad hoc coarser-grid
operators with iterative techniques used as smoothers. Empirical tests suggest
that the most effective smoothers have to be of Krylov type with subgraph
preconditioners, while the projectors, which define the coarser-grid operators,
have to be designed for maintaining as much as possible the graph structure of
the projected matrix at the inner levels. The main theoretical contribution of
the paper is the characterization of necessary and sufficient conditions for
preserving the graph structure. In this framework it is possible to explain why
the classical projectors inherited from differential equations are good in the
differential context and why they may behave unsatisfactorily for unstructured
graphs. Furthermore, we report and discuss several numerical experiments,
showing that our approach is effective even in very difficult cases where the
known approaches are rather slow. In conclusion, the main advantage of the
proposed approach is its robustness, since our multigrid-type technique behaves
uniformly well in all cases, without requiring either the setting or the
knowledge of critical parameters, as it happens when using the best known
preconditioned Krylov methods.

**Keywords:** Graph Matrices, Multigrid,
Conditioning and Preconditioning

The final publication is available at http://dx.doi.org/10.1016/j.amc.2015.08.033. Thanks to the provisions about Scholarly Sharing, you can also download the Author's accepted manuscript version here.
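A (weighted) graph Laplacian of the kind considered above is L = D − A, the degree matrix minus the adjacency matrix. The small self-contained sketch below (ours, for illustration only) builds one and checks the defining identity xᵀLx = Σ w_ij (x_i − x_j)² over the edges, which is what makes L symmetric positive semidefinite with L·1 = 0.

```python
def laplacian(n, weighted_edges):
    # Weighted graph Laplacian L = D - A on n nodes, from (i, j, w) edges.
    L = [[0.0] * n for _ in range(n)]
    for i, j, w in weighted_edges:
        L[i][j] -= w   # off-diagonal: minus the edge weight (adjacency)
        L[j][i] -= w
        L[i][i] += w   # diagonal: weighted degree
        L[j][j] += w
    return L

def quad_form(L, x):
    n = len(x)
    return sum(L[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

edges = [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 0.5)]  # a weighted triangle
L = laplacian(3, edges)
x = [1.0, -1.0, 2.0]
# x^T L x equals sum_{(i,j)} w_ij * (x_i - x_j)^2, and every row sums to 0.
lhs = quad_form(L, x)
rhs = sum(w * (x[i] - x[j]) ** 2 for i, j, w in edges)
print(abs(lhs - rhs) < 1e-12 and all(abs(sum(row)) < 1e-12 for row in L))  # -> True
```

The zero row sums are exactly the property that the projectors discussed in the paper must preserve for the coarser-grid operators to remain graph Laplacians.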

*Journal of Optimization Theory and Applications* **164**(3),
1051–1077, 2015

**Abstract:** Routing real-time traffic with maximum packet delay in
contemporary telecommunication networks requires not only choosing a path,
but also reserving transmission capacity along its arcs, as the delay is a
nonlinear function of both components. The problem is known to be solvable
in polynomial time under quite restrictive assumptions, i.e., Equal Rate
Allocations (all arcs are reserved the same capacity) and identical
reservation costs, whereas the general problem is *NP*-hard. We first
extend the approaches for the ERA version to a pseudo-polynomial Dynamic
Programming algorithm for integer arc costs, and an FPTAS for the case of
general arc costs. We then show that the general problem can be formulated as a
mixed-integer Second-Order Cone (SOCP) program, and therefore solved with
off-the-shelf technology. We compare two formulations: one based on standard
big-M constraints, and one where Perspective Reformulation techniques are used
to tighten the continuous relaxation. Extensive computational experiments on
both real-world networks and randomly-generated realistic ones show that the
ERA approach is fast and provides an effective heuristic for the general
problem whenever it manages to find a solution at all, but it fails for a
significant fraction of the instances that the SOCP models can solve. We
therefore propose a three-pronged approach that combines the fast running time
of the ERA algorithm and the effectiveness of the SOCP models, and show that
it is capable of solving realistic-sized instances with high accuracy at
different levels of network load in a time compatible with real-time usage in
an operating environment.

**Keywords:** Delay-constrained Routing, Approximation Algorithms,
Mixed-Integer NonLinear Programming, Second-Order Cone
Model, Perspective Reformulation

The final publication is available at Springer via http://dx.doi.org/10.1007/s10957-014-0624-5, but a Technical Report version can be freely downloaded. The paper is also available on SharedIt.

*Technical Report*, **IASI R. 15-06**, 2015

**Abstract:** The Unit Commitment (UC) problem in electrical power
production requires to optimally operate a set of power generation units over
a short time horizon (one day to a week). Operational constraints depend on
the type of the generation units (e.g., thermal, hydro, nuclear, ...). The
Single-Unit Commitment (1UC) problem is the restriction of UC that considers
only one unit; it is useful in deregulated systems (for the so-called
self-scheduling), and when decomposition methods are applied to (multi-unit)
UC. Typical constraints in (1UC) concern minimum and maximum power output,
minimum-up and -down time, start-up and shut-down limits, ramp-up and ramp-down
limits. In this work we present the first MIP formulation that describes the
convex hull of the feasible solutions of (1UC) including also
ramp-up and ramp-down constraints. Our formulation has a polynomial number of
both variables and constraints and it is based on the efficient Dynamic
Programming algorithm proposed in [A16].

**Keywords:** Unit Commitment problem, Ramp Constraints,
MIP Formulations, Dynamic Programming

*The Computer Journal* **58**(6), 1416–1430, 2015

**Abstract:** Computing network paths under worst-case delay constraints
has been the subject of abundant literature in the past two decades. Assuming
Weighted Fair Queueing scheduling at the nodes, this translates to computing
paths and reserving rates at each link. The problem is NP-hard in general,
even for a single path; hence polynomial-time heuristics have been proposed in
the past, that either assume equal rates at each node, or compute the path
heuristically and then allocate the rates optimally on the given path. In this
paper we show that the above heuristics, albeit finding optimal solutions quite
often, can fail to find feasible paths even at very low loads, and that this can
be avoided by solving the problem, i.e., path computation and rate allocation,
jointly at optimality. This is possible by modeling the problem as a
mixed-integer second-order cone program and solving it optimally in
split-second times for relatively large networks on commodity hardware; this
approach can also be easily turned into a heuristic one, trading a negligible
increase in blocking probability for an order-of-magnitude reduction in
computation time. Extensive simulations show that these methods are feasible in today's
ISP networks and they significantly outperform the existing schemes in terms
of blocking probability.

**Keywords:** QoS Routing, Delay bounds, Joint Path Computation and Rate
Allocation, Second-Order Cone Model, Simulations

The published paper is available at http://dx.doi.org/10.1093/comjnl/bxu053, but thanks to the Oxford Journals License to Publish, having passed 12 months from the on-line publication date you can download the POST-PRINT version of the Article here.

*4OR* **13**(2), 115–171, 2015

**Abstract:** The Unit Commitment problem in energy management aims at
finding the optimal production schedule of a set of generation units while
meeting various system-wide constraints. It has always been a large-scale,
non-convex difficult problem, especially in view of the fact that operational
requirements imply that it has to be solved in an unreasonably small time for
its size. Recently, the ever increasing capacity for renewable generation has
strongly increased the level of uncertainty in the system, making the (ideal)
Unit Commitment model a large-scale, non-convex, *uncertain* (stochastic,
robust, chance-constrained) program. We provide a survey of the literature on
methods for the Uncertain Unit Commitment problem, in all its variants. We
start with a review of the main contributions on solution methods for the
deterministic versions of the problem, focusing on those based on mathematical
programming techniques that are more relevant for the uncertain versions of
the problem. We then present and categorize the approaches to the latter, also
providing entry points to the relevant literature on optimization under
uncertainty.

**Keywords:** Unit Commitment, Uncertainty, Large-Scale Optimization,
Survey

The published paper is available at http://dx.doi.org/10.1007/s10288-014-0279-y. Thanks to the provisions of Springer's Copyright Transfer Statement, you can download the Author's accepted manuscript version here. The paper is also available on SharedIt.

*The Computer Journal* **57**(11), 1616–1623, 2014

**Abstract:** A graph *G* = (*V*, *E*) is called a pairwise
compatibility graph (PCG) if there exists an edge-weighted tree *T* and two
non-negative real numbers *d*_{min} and *d*_{max} such
that each leaf *l_{u}* of *T* corresponds to a vertex *u* ∈ *V* and there is
an edge (*u*, *v*) ∈ *E* if and only if
*d*_{min} ≤ *d_{T}*(*l_{u}*, *l_{v}*) ≤ *d*_{max}, where
*d_{T}*(*l_{u}*, *l_{v}*) denotes the sum of the weights of the edges on the
path connecting *l_{u}* and *l_{v}* in *T*.

**Keywords:** Pairwise Compatibility Graphs, Caterpillar, Centipede,
Wheel

The published paper is available at http://dx.doi.org/10.1093/comjnl/bxt068, but thanks to the Oxford Journals License to Publish, having passed 12 months from the on-line publication date you can download the POST-PRINT version of the Article here.

*Operations Research* **62**(4), 891–909, 2014

**Abstract:** Any institution that disseminates data in aggregated
form has the duty to ensure that individual confidential information is not
disclosed, either by not releasing data or by perturbing the released data,
while maintaining data utility. Controlled tabular adjustment (CTA) is a
promising technique of the second type where a protected table that is
close to the original one in some chosen distance is constructed. The
choice of the specific distance entails a trade-off: while the Euclidean
distance has been shown (and is confirmed here) to produce tables with
greater "utility", it gives rise to Mixed Integer Quadratic Problems
(MIQPs) with pairs of linked semi-continuous variables that are more
difficult to solve than the Mixed Integer Linear Problems corresponding to
linear norms. We provide a novel analysis of Perspective Reformulations
(PRs) for this special structure; in particular, we devise a
*Projected* PR (P^{2}R) which is piecewise-conic
but simplifies to a (nonseparable) MIQP when the instance is symmetric. We
then compare different formulations of the CTA problem, showing that the
ones based on P^{2}R most often obtain better
computational results.

**Keywords:** Mixed Integer Quadratic Programming, Perspective
Reformulation, Data Privacy, Statistical Disclosure Control, Tabular Data
Protection, Controlled Tabular Adjustment

The published paper is available at http://dx.doi.org/10.1287/opre.2014.1293. A Technical Report version is available here.

*CALCOLO* **52**(4), 425–444, 2015

**Abstract:** We analyze the practical efficiency of multi-iterative
techniques for the numerical solution of graph-structured large linear systems.
In particular we evaluate the effectiveness of several combinations of
smoothers and coarser-grid operators which preserve the graph structure of the
projected matrix at the inner levels. We also discuss and evaluate some
possible strategies (inverse projection and dense projection) to connect
coarser-grid operators and graph-based preconditioners. Our results show that
an appropriate choice of adaptive projectors and tree-based preconditioned
conjugate gradient methods results in highly effective and robust approaches,
capable of efficiently solving large-scale, difficult systems for
which the known iterative solvers alone can be rather slow.

**Keywords:** Graph Matrices, Multigrid,
Conditioning and Preconditioning

The final publication is available at Springer via http://dx.doi.org/10.1007/s10092-014-0123-y. A large part of the content of this article can be found in this Technical Report version. The paper is also available on SharedIt.

*Proceedings of the 25^{th} Annual ACM-SIAM
Symposium on Discrete Algorithms* (SODA14), 1582–1595, Portland,
January 5-7 2014

**Abstract:** The advent of massive datasets (and the consequent design
of high-performing distributed storage systems) have reignited the interest of
the scientific and engineering community towards the design of lossless data
compressors which achieve effective compression ratio and very efficient
decompression speed. Lempel-Ziv's LZ77 algorithm is the de facto choice in this
scenario because of its decompression speed and its flexibility in trading
decompression speed versus compressed-space efficiency. Each of the existing
implementations offers a trade-off between space occupancy and decompression
speed, so software engineers have to content themselves with picking the one
that comes closest to the requirements of the application at hand.
Starting from these premises, and for the first time in the literature, we
address in this paper the problem of trading optimally, and in a principled
way, the consumption of these two resources, by introducing the *Bicriteria
LZ77-Parsing problem*, which formalizes what data compressors have
traditionally approached by means of heuristics. The
goal is to determine an LZ77 parsing which minimizes the space occupancy in
bits of the compressed file, provided that the decompression time is bounded
by a fixed amount (or vice versa). This way, the software engineer can set
the space (or time) requirements and then derive the LZ77 parsing which
optimizes the decompression speed (or the space occupancy, respectively). We
solve this problem efficiently in O(*n* log^{2}
*n*) time and optimal linear space within a small, additive
approximation, by proving and deploying some specific structural properties
of the weighted graph derived from the possible LZ77-parsings of the input
file. The preliminary set of experiments shows that our novel proposal
dominates all the highly engineered competitors, hence offering a win-win
situation in theory and practice.
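A common general-purpose technique for such bicriteria shortest-path problems is bisection on the Lagrangian multiplier of the time constraint; the sketch below is a hypothetical illustration of that generic idea on an abstract DAG, not the paper's algorithm, which additionally exploits structural properties of the LZ77 parsing graph to reach the stated O(*n* log^{2} *n*) bound:

```python
def lagrangian_bicriteria(n, edges, T, iters=60):
    """Bisection on the multiplier `lam` of the time constraint for a
    bicriteria shortest path on a DAG with nodes 0..n-1 in topological
    order: minimize total space subject to total time <= T.  Each edge
    is (u, v, space, time) with u < v.  Illustrative sketch only."""
    INF = float("inf")

    def sp(lam):
        # DP shortest path with scalarized edge weight: space + lam * time
        cost = [INF] * n
        space = [0] * n
        time = [0] * n
        cost[0] = 0.0
        for u, v, s, t in sorted(edges):   # u < v, so this is topological
            w = s + lam * t
            if cost[u] + w < cost[v]:
                cost[v] = cost[u] + w
                space[v] = space[u] + s
                time[v] = time[u] + t
        return space[n - 1], time[n - 1]

    lo, hi, best = 0.0, 1e6, None
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        s, t = sp(lam)
        if t <= T:                         # feasible: try a smaller multiplier
            if best is None or s < best[0]:
                best = (s, t)
            hi = lam
        else:                              # too slow: penalize time more
            lo = lam
    return best                            # (space, time) of best path found
```

For instance, with two parallel routes (one small-space/slow, one large-space/fast) the routine returns the cheapest route whose time fits within the budget `T`.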

The published paper is available at http://dx.doi.org/10.1137/1.9781611973402.115. A Technical Report version is available here.

*Mathematical Programming* **145**(1), 133–161, 2014

**Abstract:** We propose a version of the bundle scheme for convex
nondifferentiable optimization suitable for the case of a sum-function where
some of the components are "easy", that is, they are Lagrangian functions of
explicitly known compact convex programs. This corresponds to a *stabilized
partial Dantzig-Wolfe decomposition*, where suitably modified representations
of the "easy" convex subproblems are inserted in the master problem as an
alternative to iteratively inner-approximating them by extreme points, thus
providing the algorithm with exact information about a part of the dual
objective function. The resulting master problems are potentially larger and
less well-structured than the standard ones, ruling out the available
specialized techniques and requiring the use of general-purpose solvers for
their solution; this strongly favors piecewise-linear stabilizing terms, as
opposed to the more usual quadratic ones, which in turn may have an adverse
effect on the convergence speed of the algorithm, so that the overall
performance may depend on appropriate tuning of all these aspects. Yet, very
good computational results are obtained in at least one relevant application:
the computation of tight lower bounds for Fixed-Charge Multicommodity Min-Cost
Flow problems.

**Keywords:** Nondifferentiable Optimization, Bundle methods,
Lagrangian Relaxation, Partial Dantzig-Wolfe Decomposition,
Multicommodity Network Design.

The published paper is available at http://dx.doi.org/10.1007/s10107-013-0642-3. Thanks to the provisions of Springer's Copyright Transfer Statement, the author's final version of the paper is available here. Actually, the final version of the paper has been simplified somewhat w.r.t. the initial development in order to only use linear duality (as opposed to Fenchel's duality); readers interested in the Convex Analysis perspective of the results may want to consult the Technical Report version as well. The paper is also available on SharedIt.

*SIAM Journal on Optimization* **23**(3), 1784–1809, 2013

**Abstract:** We present a convex nondifferentiable minimization
algorithm of proximal bundle type that does not rely on measuring descent of
the objective function to declare the so-called "serious steps"; rather, a
merit function is defined which is decreased at each iteration, leading to a
(potentially) continuous choice of the stepsize between zero (the null step)
and one (the serious step). By avoiding the discrete choice the convergence
analysis is simplified, and we can more easily obtain efficiency estimates for
the method. Some choices for the step selection actually reproduce the
dichotomic behavior of standard proximal bundle methods, but shedding new light
on the rationale behind the process, and ultimately with different rules;
furthermore, using nonlinear upper models of the function in the step selection
process can lead to actual fractional steps.

**Keywords:** Nonsmooth Optimization, Bundle Methods,
Nonmonotone Algorithm

The published paper is available at http://dx.doi.org/10.1137/120888867. Due to the provisions of SIAM Copyright Assignment Agreement, it can also be downloaded here.

*Mathematical Programming* **140**, 45–76, 2013

**Abstract:** We discuss an algorithmic scheme, which we call the
stabilized structured Dantzig-Wolfe decomposition method, for solving
large-scale structured linear programs. It can be applied when the subproblem of
the standard Dantzig-Wolfe approach admits an alternative master model amenable
to column generation, other than the standard one in which there is a variable
for each of the extreme points and extreme rays of the corresponding polyhedron.
Stabilization is achieved by the same techniques developed for the standard
Dantzig-Wolfe approach and it is equally useful to improve the performance, as
shown by computational results obtained on an application to the multicommodity
capacitated network design problem.

**Keywords:** Dantzig-Wolfe Decomposition Method, Structured
Linear Program, Multicommodity Capacitated Network Design
Problem, Reformulation, Stabilization

The published paper is available at http://dx.doi.org/10.1007/s10107-012-0626-8. Thanks to the provisions of Springer's Copyright Transfer Statement, the author's final version of the paper is available here. A somewhat different presentation of the results is available in the Technical Report version.

*European Journal of Operational Research* **229**(1), 37–40,
2013

**Abstract:** The Continuous Convex Separable Quadratic Knapsack problem
(CQKnP) is an easy but useful model that has very many different applications.
Although the problem can be solved quickly, it must typically be solved very
many times within approaches to (much) more difficult models; hence an
efficient solution approach is required. We present and discuss a small
open-source library for its solution that we have recently developed and
distributed.
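As an illustration of the kind of method such a library may implement (a hypothetical sketch, not the library's actual code), the problem min Σ_i (½ a_i x_i² + b_i x_i) s.t. Σ_i x_i = c, l_i ≤ x_i ≤ u_i with a_i > 0 can be solved by monotone search on the multiplier of the knapsack constraint:

```python
def cqknp(a, b, l, u, c, tol=1e-10):
    """Solve min sum_i 0.5*a_i*x_i^2 + b_i*x_i s.t. sum_i x_i = c,
    l_i <= x_i <= u_i, all a_i > 0, by bisection on the multiplier mu
    of the knapsack constraint.  For fixed mu the separable optimum is
    x_i(mu) = clip((mu - b_i)/a_i, l_i, u_i), and sum_i x_i(mu) is
    nondecreasing in mu, so a plain bisection finds the right mu."""
    if not (sum(l) <= c <= sum(u)):
        raise ValueError("infeasible")

    def x_of(mu):
        return [min(max((mu - bi) / ai, li), ui)
                for ai, bi, li, ui in zip(a, b, l, u)]

    # bracket mu: at lo every x_i sits at l_i, at hi every x_i sits at u_i
    lo = min(ai * li + bi for ai, bi, li in zip(a, b, l))
    hi = max(ai * ui + bi for ai, bi, ui in zip(a, b, u))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sum(x_of(mid)) < c:
            lo = mid
        else:
            hi = mid
    return x_of(0.5 * (lo + hi))
```

Production codes replace the bisection by breakpoint search to get linear-time behavior, but the optimality condition exploited is the same.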

**Keywords:** Quadratic Programming, Continuous Nonlinear Resource
Allocation Problem, Lagrangian Relaxation, Optimization
Software.

The published paper is available at http://dx.doi.org/10.1016/j.ejor.2013.02.038. Thanks to the provisions of Elsevier's Journal Publishing Agreement about Permitted Scholarly Posting, the Accepted Author Manuscript is available here.

*Journal of Optimization Theory and Applications*, **155**(2),
430–452, 2012

**Abstract:** We propose a novel generalization of the Canonical
Difference of Convex problem (CDC), and we study the convergence of outer
approximation algorithms for its solution, which use an approximated oracle
for checking the global optimality conditions. Although the approximated
optimality conditions are similar to those of CDC, this new class of problems
is shown to significantly differ from its special case. Indeed, outer
approximation approaches for CDC need to be substantially modified in order to
cope with the more general problem, leading to new algorithms. We develop a
hierarchy of conditions that guarantee global convergence, and we build three
different cutting plane algorithms relying on them.

**Keywords:** Single Reverse Polar Problems, Approximate Optimality
Conditions, Cutting Plane Algorithms

The published paper is available at http://dx.doi.org/10.1007/s10957-012-0069-7. A Technical Report version is available here. The paper is also available on SharedIt.

*Mathematical Programming* **136**(2), 375–402, 2012

**Abstract:** One of the foremost difficulties in solving Mixed-Integer
Nonlinear Programs, either with exact or heuristic methods, is to find a
feasible point. We address this issue with a new feasibility pump algorithm
tailored for nonconvex Mixed-Integer Nonlinear Programs. Feasibility pumps are
algorithms that iterate between solving a continuous relaxation and a
mixed-integer relaxation of the original problem. Such approaches currently
exist in the literature for Mixed-Integer Linear Programs and convex
Mixed-Integer Nonlinear Programs: both cases exhibit the distinctive property
that the continuous relaxation can be solved in polynomial time. In nonconvex
Mixed-Integer Nonlinear Programming such a property does not hold, and
therefore special care has to be exercised in order to allow feasibility pump
algorithms to rely only on local optima of the continuous relaxation. Based on
a new, high level view of feasibility pump algorithms as a special case of the
well-known successive projection method, we show that many possible different
variants of the approach can be developed, depending on how several different
(orthogonal) implementation choices are taken. A remarkable twist of
feasibility pump algorithms is that, unlike most previous successive projection
methods from the literature, projection is "naturally" taken in two different
norms in the two different subproblems. To cope with this issue while retaining
the local convergence properties of standard successive projection methods we
propose the introduction of appropriate *norm constraints* in the
subproblems; these actually seem to significantly improve the practical
performance of the approach. We present extensive computational results on the
MINLPLib, showing the effectiveness and efficiency of our algorithm.
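To make the successive-projection view concrete, here is a toy sketch (hypothetical, not the paper's algorithm, which handles nonconvex NLP relaxations and norm constraints): integrality on all variables, a Euclidean ball as the continuous feasible set, rounding as the projection onto the integer lattice, and a seeded random perturbation as an anti-cycling device:

```python
import math
import random

def feasibility_pump(center, radius, max_iter=100, seed=0):
    """Alternate two projections until they agree: rounding (projection
    onto the integer lattice) and Euclidean projection onto the ball
    {x : ||x - center|| <= radius}, the continuous feasible set here.
    Returns an integer feasible point, or None on failure."""
    rng = random.Random(seed)
    x = list(center)                      # start from a relaxation solution
    for _ in range(max_iter):
        x_int = [round(v) for v in x]     # projection onto the lattice
        d = math.dist(x_int, center)
        if d <= radius + 1e-9:
            return x_int                  # integer and feasible: done
        # project x_int back onto the ball (continuous feasible set)
        x = [c + (v - c) * radius / d for v, c in zip(x_int, center)]
        if [round(v) for v in x] == x_int:        # cycling detected:
            x = [v + rng.uniform(-0.5, 0.5) for v in x]   # perturb
    return None
```

The cycling check and perturbation play the role that tabu lists and restarts play in full-fledged feasibility pumps; when no integer point lies inside the ball, the loop correctly gives up after `max_iter` rounds.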

**Keywords:** Feasibility Pump, MINLP, Global Optimization,
nonconvex NLP.

The published paper is available at http://dx.doi.org/10.1007/s10107-012-0608-x. Thanks to the provisions of Springer's Copyright Transfer Statement, the author's final version of the paper is available here. The paper is also available on SharedIt.

*SIAM Journal on Optimization* **21**(4), 1418–1438, 2011

**Abstract:** We present a bundle method for convex nondifferentiable
minimization where the model is a piecewise quadratic convex approximation
of the objective function. Unlike standard bundle approaches, the model only
needs to support the objective function from below at a properly chosen
(small) subset of points, as opposed to everywhere. We provide the
convergence analysis for the algorithm, with a general form of master
problem which combines features of trust-region stabilization and proximal
stabilization, taking care of all the important practical aspects such as
proper handling of the proximity parameters and of the bundle of information.
Numerical results are also reported.

**Keywords:** NonDifferentiable Optimization, Bundle methods,
Quadratic model.

The published paper is available at http://dx.doi.org/10.1137/100817930. Due to the provisions of SIAM Copyright Assignment Agreement, it can also be downloaded here.

*Computers & Operations Research* **38**(12), 1805–1815,
2011

**Abstract:** This paper concerns the problem of minimizing the maximum
link utilization of IP telecommunication networks under the joint use of
traditional IGP routing protocols, such as IS-IS and OSPF, and the more
sophisticated MPLS-TE technology. It is shown that the problem of choosing the
optimal routing, both under working conditions and under single link failure
scenarios, can be cast as a Linear Program of reasonable size. The proposed
model is validated by a computational experimentation performed on synthetic and
real networks: the obtained results show that the new approach considerably
reduces the maximum link utilization of the network with respect to simply
optimizing the IGP weights, at the cost of adding a limited number of label
switched paths (LSPs). Optimizing the set of IGP weights within the overall
approach further improves performance. The computational time needed to solve
the models matches well with real-time requirements, and makes it possible to
consider network design problems.

**Keywords:** Routing problems, LP models, Robustness

The published paper is available at http://dx.doi.org/10.1016/j.cor.2011.02.019. A Technical Report version is available here.

*Operations Research* **59**(5), 1225–1232, 2011

**Abstract:** The *Perspective Relaxation* (PR) is a general
approach for constructing tight approximations to Mixed Integer Non Linear
Programs (MINLP) with semicontinuous variables. The PR of a MINLP can be
formulated either as a Mixed Integer Second Order Cone Program (provided that
the original objective function is SOCP-representable), or as a Semi-Infinite
MINLP. In this paper, we show that under some further assumptions (rather
restrictive, but satisfied in several practical applications), the PR of a
Mixed Integer Quadratic Program can also be reformulated as a piecewise
quadratic problem, ultimately yielding a QP relaxation of roughly the same
size as the standard continuous relaxation. Furthermore, if the original
problem has some exploitable structure, then this structure is typically
preserved in the reformulation, thus allowing the construction of specialized
approaches for solving the PR. We report on implementing these ideas on two
MIQPs with appropriate structure: a sensor placement problem and a
quadratic-cost (single-commodity) network design problem.
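For a single semicontinuous variable, the construction underlying the PR (standard in this literature; the notation below is illustrative, not the paper's) reads:

```latex
% Convex quadratic cost on a semicontinuous variable x with indicator y:
% x = 0 when y = 0, and l \le x \le u when y = 1 (with a > 0).
\min_{x,y}\ a x^{2} + b x + c y
\quad \text{s.t.}\quad l y \le x \le u y,\; y \in \{0,1\}
% The PR replaces a x^2 by its perspective y \, a (x/y)^2 = a x^2 / y,
% which yields a tighter convex relaxation for y \in (0,1]:
\min_{x,y}\ \frac{a x^{2}}{y} + b x + c y
\quad \text{s.t.}\quad l y \le x \le u y,\; y \in [0,1]
% SOCP-representable by adding a variable z with the rotated-cone
% constraint a x^{2} \le z\,y and minimizing z + b x + c y instead.
```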

**Keywords:** Mixed Integer Non Linear Problems, Semicontinuous Variables,
Perspective Relaxation, Sensor Placement Problem,
Network Design Problem

The published paper is available at http://dx.doi.org/10.1287/opre.1110.0930. A Technical Report version is available here.

*International Journal of Electrical Power and Energy Systems*
**33**, 585–593, 2011

**Abstract:** The short-term Unit Commitment (UC) problem in
hydro-thermal power generation is a fundamental problem in short-term
electrical generation scheduling. Historically, Lagrangian techniques have
been used to tackle this large-scale, difficult Mixed-Integer NonLinear
Program (MINLP); this requires being able to efficiently solve the Lagrangian
subproblems, which has only recently become possible (efficiently enough) for
units subject to significant ramp constraints. In the last years, alternative
approaches have been devised where the nonlinearities in the problem are
approximated by means of piecewise-linear functions, so that UC can be
approximated by a Mixed-Integer Linear Program (MILP); in particular, using a
recently developed class of *valid inequalities* for the problem, called
"Perspective Cuts", significant improvements have been obtained in the
efficiency and effectiveness of the solution algorithms. These two different
approaches have complementary strengths; Lagrangian ones provide very good
lower bounds quickly, but they require sophisticated heuristics (which
may need to be changed every time the mathematical model
changes) for producing actual feasible solutions. MILP approaches have
been shown to be able to provide very good feasible solutions quickly, but
their lower bound is significantly worse. We present a sequential approach
which combines the two methods, trying to exploit each one's strengths; we
show, by means of extensive computational experiments on realistic instances,
that the sequential approach may exhibit significantly better efficiency than
either of the two basic ones, depending on the degree of accuracy required
of the feasible solutions.

**Keywords:** Hydro-Thermal Unit Commitment, Mixed-Integer NonLinear
Program Formulations, Lagrangian Relaxation

The published paper is available at http://dx.doi.org/10.1016/j.ijepes.2010.12.013. A Technical Report version is available here.

*Operations Research Letters* **39**(1), 36–39, 2011

**Abstract:** This paper considers the special case of the robust network
design problem where the dominant extreme points of the demand polyhedron have
a disjoint support. In this case static and dynamic routing lead to the same
optimal solution, both for the splittable and the unsplittable case. As a
consequence, the robust network design problem with (splittable) dynamic routing
is polynomially solvable, whereas it is co*NP*-Hard in the general case.
This result applies to particular instances of the single-source Hose model.

**Keywords:** Robust Optimization, Network Design, Hose Model

The published paper is available at http://dx.doi.org/10.1016/j.orl.2010.11.009. A Technical Report version is available here (but be advised that the final paper has changed significantly).

in *Lecture Notes in Computer Science* **6683**,
5^{th} Learning and Intelligent OptimizatioN
Conference - LION 5, C.A. Coello Coello (Ed.), 407–422,
Springer-Verlag, 2011

**Abstract:** Reformulation is one of the most useful and widespread
activities in mathematical modeling, in that finding a "good" formulation is a
fundamental step in being able to solve a given problem. Currently, this is
almost exclusively a human activity, with next to no support from modeling and
solution tools. In this paper we show how the reformulation system defined in
[13]
allows one to automate the task of exploring the formulation space of a problem,
using a specific example (the Hyperplane Clustering Problem). This nonlinear
problem admits a large number of both linear and nonlinear formulations, which
can all be generated by defining a relatively small set of general Atomic
Reformulation Rules (ARR). These rules are not problem-specific, and could be
used to reformulate many other problems, thus showing that a general-purpose
reformulation system based on the ideas developed in
[13]
could be feasible.

The published paper is available at www.springerlink.com. Thanks to the copyright agreement of Lecture Notes on Computer Science, the author's final version is available here.

*Journal of Global Optimization* **46**(2), 163–189, 2010

**Abstract:** The paper discusses a general framework for outer
approximation type algorithms for the canonical DC optimization problem. The
algorithms rely on a polar reformulation of the problem and exploit an
approximated oracle in order to check global optimality. Consequently,
approximate optimality conditions are introduced and bounds on the quality of
the approximate global optimal solution are obtained. A thorough analysis of
properties which guarantee convergence is carried out; two families of
conditions are introduced which lead to the design of six implementable algorithms,
whose convergence can be proved within a unified framework.

**Keywords:** DC problems, Polar Set, Approximate Optimality Conditions,
Cutting Plane Algorithms

The published paper is available at http://dx.doi.org/10.1007/s10898-009-9415-1. A Technical Report version is available here. The paper is also available on SharedIt.

in *Lecture Notes in Computer Science* **6049**,
9^{th} International Symposium on Experimental
Algorithms - SEA 2010, P. Festa (Ed.), Springer-Verlag, 350–360,
2010

**Abstract:** We present a new Feasibility Pump algorithm tailored for
nonconvex Mixed Integer Nonlinear Programming problems. Differences from the
previously proposed Feasibility Pump algorithms and difficulties arising from
nonconvexities in the models are extensively discussed. The main methodological
innovations of this variant are: (a) the first subproblem is a nonconvex
continuous Nonlinear Program, which is solved using global optimization
techniques; (b) the solution method for the second subproblem is complemented
by a tabu list. We exhibit computational results showing the good performance
of the algorithm on instances taken from the MINLPLib.

**Keywords:** Mixed-Integer Nonlinear Programming, nonconvex,
heuristic method, experiments.

The original publication is available at www.springerlink.com. Thanks to the copyright agreement of Lecture Notes on Computer Science, the author's final version is available here.

*Operations Research Letters* **38**, 341–345, 2010

**Abstract:** Interval-gradient cuts are (nonlinear) valid inequalities
derived from continuously differentiable nonconvex constraints. In this paper
we define interval-subgradient cuts, a generalization to nondifferentiable
constraints, and show that no-good cuts with 1-norm are a special case of
interval-subgradient cuts. We then briefly discuss what happens if other norms
are used.

The published paper is available at http://dx.doi.org/10.1016/j.orl.2010.05.010. A Technical Report version is available here

*Proceedings of the European Workshop on Mixed Integer Nonlinear
Programming 2010* (EWMINLP10), P. Bonami, L. Liberti, A.J. Miller,
A. Sartenaer editors, Marseille, April 12-16 2010

**Abstract:** The *Perspective Relaxation* (PR) is a general approach
for constructing tight approximations to Mixed-Integer NonLinear Problems with
semicontinuous variables. The PR of a MINLP can be formulated either as a
Mixed-Integer Second-Order Cone Program, provided that the original objective
function is SOCP-representable, or as a Semi-Infinite MINLP. We show that under
some further assumptions (rather restrictive, but satisfied in several practical
applications) the PR of Mixed-Integer Quadratic Programs can also be reformulated
as a piecewise linear-quadratic problem of roughly the same size as the standard
continuous relaxation. Furthermore, if the original problem has some exploitable
structure, this is typically preserved in the reformulation, allowing the
construction of specialized approaches for solving the PR. We report on implementing
these ideas on two MIQPs with appropriate structure: a sensor placement problem
and a Quadratic-cost (single-commodity) network design problem.

**Keywords:** Mixed-Integer Quadratic Problems, Perspective Relaxation

in *Complex Systems Design & Management: Proceedings of the First
International Conference on Complex Systems Design & Management
CSDM 2010*, M. Aiguier, F. Bretaudeau and D. Krob (Eds.),
Springer-Verlag, 85–97, 2010

**Abstract:** Complex, hierarchical, multi-scale industrial and
natural systems generate increasingly large mathematical models. I-DARE is a
structure-aware modeling-reformulating-solving environment based on
Declarative Programming, that allows the construction of complex structured
models. The main aim of the system is to produce models that can be
*automatically and algorithmically* reformulated to search for the "best"
formulation, intended as the one for which the most efficient solution approach
is available. This requires exploration of a high-dimensional space comprising
all (structured) reformulations of a given instance, all available solvers for
(each part of) the formulation, and all possible configurations of the relevant
algorithmic parameters for each solver. A fundamental pre-requisite for this
exploration is the ability to predict the efficiency of a given (set of)
algorithm(s), considering their configuration(s), for a given instance; this
is, however, a vastly nontrivial task. This article describes how the I-DARE
system organizes the information on the instance at hand in order to make the
search in the (formulation, solver, configuration) space possible with several
different exploration techniques. In particular, we propose a way to combine
general machine learning mechanisms and ad-hoc methods, where available, in
order to effectively compute the "objective function" of the search, i.e., the
prediction of the effectiveness of a point in the space. We also discuss how
this mechanism can take upon itself part of the exploration, the one in the
sub-space of configurations, thus simplifying the task to the rest of the
system by reducing the dimensionality of the search space it has to
traverse.

**Keywords:** Mathematical Models, Machine Learning,
Automatic Algorithm Selection, Reformulation

Download a Technical Report version.

*Discrete Applied Mathematics* **157**(6), 1167–1184, 2009

**Abstract:** Column generation algorithms are instrumental in many areas
of applied optimization, where linear programs with an enormous number of
columns need to be solved. Although successfully employed in many applications,
these approaches suffer from well-known *instability* issues that
somewhat limit their efficiency. Building on the theory developed for
nondifferentiable optimization algorithms, a large class of stabilized column
generation algorithms can be defined which avoid the instability issues by
using an explicit stabilizing term in the dual; this amounts to considering
a (generalized) augmented Lagrangian of the primal master problem. Since the
theory allows for a great degree of flexibility in the choice and in the
management of the stabilizing term, one can use piecewise-linear or quadratic
functions that can be efficiently dealt with by off-the-shelf solvers. The
effectiveness in practice of this approach is demonstrated by extensive
computational experiments on large-scale Vehicle and Crew Scheduling
problems. Also, the results of a detailed computational study on the impact
of the different choices in the stabilization term (shape of the function,
parameters), and their relationships with the quality of the initial dual
estimates, on the overall effectiveness of the approach are reported,
providing practical guidelines for selecting the most appropriate variant
in different situations.
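In a generic notation (illustrative, not the paper's exact development), the stabilized dual master problem augments the cutting-plane model maximization with a proximal penalty around a stability center:

```latex
% Unstabilized dual master over the current bundle B of columns:
\pi^{k+1} \in \operatorname{argmax}_{\pi}\ \check f_{B}(\pi)
% Stabilized dual master: penalize moving away from the center \bar\pi
\pi^{k+1} \in \operatorname{argmax}_{\pi}\
    \check f_{B}(\pi) - \mathcal{D}_{t}(\pi - \bar\pi)
% \check f_B = cutting-plane model of the dual function; \mathcal{D}_t a
% convex (piecewise-linear or quadratic) stabilizing term with parameter t.
% By duality, the primal master problem then becomes a (generalized)
% augmented Lagrangian of the original primal master problem.
```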

**Keywords:** Column Generation, Proximal Point methods, Bundle methods,
Vehicle and Crew Scheduling problems

The published paper is available at http://dx.doi.org/10.1016/j.dam.2008.06.021. A Technical Report version is available here.

*Optimization Methods and Software* **25**(1), 19–27, 2009

**Abstract:** In this paper we study approximate optimality conditions
for the Canonical DC (CDC) optimization problem and their relationships with
stopping criteria for a large class of solution algorithms for the problem. In
fact, global optimality conditions for CDC are very often restated in terms of
a nonconvex optimization problem, that has to be solved each time the
optimality of a given tentative solution has to be checked. Since this is in
principle a costly task, it makes sense to only solve the problem approximately,
leading to an inexact stopping criterion and therefore to approximate optimality
conditions. In this framework, it is important to study the relationships
between the approximation in the stopping criterion and the quality of the
solutions that the corresponding approximated optimality conditions may
eventually accept as optimal, in order to ensure that a small tolerance in the
stopping criterion does not lead to a disproportionately large approximation of
the optimal value of the CDC problem. We develop conditions ensuring that this
is the case; these turn out to be closely related with the well-known concept
of *regularity* of a CDC problem, actually coinciding with the latter if
the reverse-constraint set is a polyhedron.

**Keywords:** DC problems, Approximate Optimality Conditions,
Value Function, Lipschitz Property

The published paper is available at http://dx.doi.org/10.1080/10556780903178048. Most of the material of this paper can be found in Sections 2 and 3 of this Technical Report.

in *Proceedings of the 7^{th} International
Workshop on the Design of Reliable Communication Networks* (DRCN 2009),
D. Medhi, J. Doucette and D. Tipper editors, IEEE, 273–280,
October 25-28 2009

**Abstract:** The paper describes an optimization model which aims at
minimizing the maximum link utilization of IP telecommunication networks under
the joint use of the traditional IGP protocols and the more sophisticated
MPLS-TE technology. The survivability of the network is taken into account in
the optimization process implementing the path restoration scheme. This scheme
benefits from the Fast Re-Route (FRR) capability, allowing service providers to
offer high availability and high revenue SLAs (Service Level Agreements). The
hybrid IGP/MPLS approach relies on the formulation of an innovative Linear
Programming mathematical model that, while optimizing the network utilization,
provides optimal user performance, efficient use of network resources, and
100% survivability in case of single link failure. The possibility of
performing an optimal exploitation of the network resources through the
joint use of the IGP and MPLS protocols provides a flexible tool for the ISP
(Internet Service Provider) networks traffic engineers. The efficiency of the
proposed approach is validated by a wide experimentation performed on synthetic
and real networks. The obtained results show that a small number of LSP tunnels
have to be set up in order to significantly reduce the congestion level of the
network while at the same time guaranteeing its survivability.

**Keywords:** Network Reliability, Optimal Design, Path Restoration,
Routing

The published paper is available at http://dx.doi.org/10.1109/DRCN.2009.5339995. A Technical Report version is available here.

*SIAM Journal on Optimization* **20**(1), 357–386, 2009

**Abstract:** Subgradient methods for nondifferentiable optimization
benefit from *deflection*, i.e., defining the search direction as a
combination of the previous direction and the current subgradient. In the
constrained case they also benefit from *projection* of the search
direction onto the feasible set prior to computing the steplength, that is,
from the use of conditional subgradient techniques. However, combining the
two techniques is not straightforward, especially if an *inexact oracle*
is available which can only compute approximate function values and
subgradients. We present a convergence analysis of several different variants,
both conceptual and implementable, of approximate conditional deflected
subgradient methods. Our analysis extends the available results in the
literature by using the main stepsize rules presented so far while
allowing deflection in a more flexible way. Furthermore, to allow for
(diminishing/square summable) rules where the stepsize is tightly controlled
a-priori, we propose a new class of deflection-restricted approaches where it
is the deflection parameter, rather than the stepsize, which is dynamically
adjusted using the "target value" of the optimization sequence. For both
Polyak-type and diminishing/square summable stepsizes, we propose a
"correction" of the standard formula which shows that, in the inexact case,
knowledge about the error computed by the oracle (which is available in
several practical applications) can be exploited in order to strengthen the
convergence properties of the method. The analysis allows for several variants
of the algorithm; at least one of them is likely to show numerical
performances similar to those of "heavy ball" subgradient methods, popular
within backpropagation approaches to train neural networks, while
possessing stronger convergence properties.
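
As an illustration only, and not the paper's algorithm, the following minimal sketch combines deflection with projection onto a simple feasible set and a Polyak-type target-value stepsize; the objective, the data, and the fixed deflection parameter `alpha` are all made up:

```python
# Minimal sketch (made-up problem): deflected, projected subgradient method
# with a Polyak-type target-value stepsize for
#   min f(x) = |x1 - 2| + |x2 + 1|  s.t.  x >= 0,  optimal value 1 at (2, 0).
def f(x):
    return abs(x[0] - 2) + abs(x[1] + 1)

def subgrad(x):
    sgn = lambda v: 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)
    return [sgn(x[0] - 2), sgn(x[1] + 1)]

def solve(x0, f_target, alpha=0.7, iters=200):
    x, d, best = list(x0), [0.0, 0.0], f(x0)
    for _ in range(iters):
        g = subgrad(x)
        # deflection: convex combination of current subgradient, old direction
        d = [alpha * gi + (1 - alpha) * di for gi, di in zip(g, d)]
        nd2 = sum(di * di for di in d)
        if nd2 == 0:
            break
        step = (f(x) - f_target) / nd2        # Polyak-type stepsize
        x = [max(0.0, xi - step * di) for xi, di in zip(x, d)]  # project on x >= 0
        best = min(best, f(x))
    return best

print(solve([10.0, 5.0], f_target=1.0))       # converges to the optimum 1.0
```

In the deflection-restricted variants analyzed in the paper it is the deflection parameter, rather than the stepsize, that is adjusted dynamically; here it is kept fixed purely for brevity.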

**Keywords:** Convex Programming, Nondifferentiable Optimization,
Subgradient Methods, Convergence Analysis,
Lagrangian relaxation, Backpropagation

The published paper is available at http://dx.doi.org/10.1137/080718814. Due to the provisions of SIAM Copyright Assignment Agreement, it can also be downloaded here.

*Discrete Applied Mathematics* **157**(6), 1229–1241, 2009

**Abstract:** We study 0-1 reformulations of the multicommodity
capacitated network design problem, which is usually modeled with general
integer variables to represent design decisions on the number of facilities
to install on each arc of the network. The reformulations are based on the
multiple choice model, a generic approach to represent piecewise linear
costs using 0-1 variables. This model is improved by the addition of
extended linking inequalities, derived from variable disaggregation
techniques. We show that these extended linking inequalities for the 0-1
model are equivalent to the residual capacity inequalities, a class of
valid inequalities derived for the model with general integer variables. In
this paper, we compare two cutting-plane algorithms to compute the same
lower bound on the optimal value of the problem: one based on the
generation of residual capacity inequalities within the model with general
integer variables, and another based on the addition of extended linking
inequalities to the 0-1 reformulation. To further improve the computational
results of the latter approach, we develop a *column-and-row
generation* approach; the resulting algorithm is shown to be competitive
with the approach relying on residual capacity inequalities.

**Keywords:** Multicommodity Capacitated Network Design, Reformulation,
Valid Inequalities, Cutting-Plane Method,
Column Generation

The published paper is available at http://dx.doi.org/10.1016/j.dam.2008.04.022. A Technical Report version is available here.

*Operations Research Letters* **37**(3), 206–210, 2009

**Abstract:** The *Perspective Reformulation* is a general approach
for constructing tight approximations to MINLP problems with semicontinuous
variables. Two different reformulations have been proposed for solving it, one
resulting in a Second-Order Cone Program, the other based on representing the
perspective function by (an infinite number of) cutting planes. We compare the
two reformulations on two sets of MIQPs to determine which one is most
effective in the context of exact or approximate Branch-and-Cut algorithms.

**Keywords:** Mixed-Integer Non Linear Programs, Reformulations,
Second-Order Cone Programs, Valid Inequalities,
Unit Commitment problem, Portfolio Optimization

The published paper is available at http://dx.doi.org/10.1016/j.orl.2009.02.003. A Technical Report version is available here.

*Proceedings of the 4^{th} International Network
Optimization Conference* (INOC2009), G. Bigi, A. Frangioni and
M.G. Scutellà editors, paper MD3-1, Pisa, April 26-29, 2009

**Abstract:** The *Perspective Relaxation* (PR) is a general approach
for constructing tight approximations to Mixed-Integer NonLinear Problems with
semicontinuous variables. The PR of a MINLP can be formulated either as a
Mixed-Integer Second-Order Cone Program (provided that the original objective
function is SOCP-representable), or as a Semi-Infinite MINLP. While these
reformulations significantly improve the lower bound and the running times of
the corresponding enumerative approaches, they may spoil some valuable
structure of the problem, such as the presence of network constraints. In this
paper, we show that under some further assumptions the PR of a Mixed-Integer
Quadratic Program can also be reformulated as a piecewise linear-quadratic
problem, ultimately yielding a QP relaxation of roughly the same size as the
standard continuous relaxation and where the (network) structure of the
original problem is safeguarded. We apply this approach to a quadratic-cost
single-commodity network design problem, comparing the newly developed
algorithm with those based on both the standard continuous relaxation and the
two usual variants of PR relaxation.

**Keywords:** Mixed-Integer NonLinear Problems, Semicontinuous Variables,
Perspective Relaxation, Nonlinear Network Design Problem

*IEEE Transactions on Power Systems* **24**(1), 105–113,
2009

**Abstract:** The short-term Unit Commitment (UC) problem in hydro-thermal
power generation is a large-scale, Mixed-Integer NonLinear Program (MINLP),
which is difficult to solve efficiently, especially for large-scale instances.
It is possible to approximate the nonlinear objective function of the problem
by means of piecewise-linear functions, so that UC can be approximated by a
Mixed-Integer Linear Program (MILP); applying the available efficient
general-purpose MILP solvers to the resulting formulations, good quality
solutions can be obtained in a relatively short amount of time. We build on
this approach, presenting a novel way of approximating the nonlinear objective
function based on a recently developed class of *valid inequalities* for
the problem, called "Perspective Cuts". At least for many realistic
instances of a general basic formulation of UC, a MILP-based heuristic
obtains comparable or slightly better solutions in less time when employing
the new approach rather than the standard piecewise linearizations, while
being no more difficult to implement and use. Furthermore, "dynamic"
formulations, whereby the approximation is iteratively improved, provide even
better results if the approximation is appropriately controlled.

**Keywords:** Hydro-Thermal Unit Commitment, Mixed-Integer Linear Program
Formulations, Valid Inequalities

The original publication is available at http://dx.doi.org/10.1109/TPWRS.2008.2004744. A Technical Report version is available here.

*Technical Report* **09-13**, Dipartimento di Informatica,
Università di Pisa, 2009

**Abstract:** Complex, hierarchical, multi-scale industrial and natural
systems generate increasingly large mathematical models. Practitioners are
usually able to formulate such models in their "natural" form; however,
solving them often requires finding an appropriate reformulation to reveal
structures in the model which make it possible to apply efficient, specialized
approaches. I-DARE is a structure-aware modeling-reformulation-solving
environment based on Declarative Programming. It allows the construction of
complex structured models, which can be automatically and algorithmically
reformulated to search for the best formulation, intended as the one for which
the most efficient solution approach is available. In order to accommodate
(potentially) all possible specialized solution methods, it defines a general
software framework for solvers, which are "registered" to specific problem
structures. This article describes in detail the application of Artificial
Intelligence in the modeling and reformulation modules of I-DARE, showing how
Declarative Programming can be used to design a structure-aware modeling
environment that allows for a new automatic reformulation methodology.

**Keywords:** Mathematical Models, Optimization Problems,
Artificial Intelligence, Declarative Programming

*Proceedings of the 4^{th} International Network
Optimization Conference* (INOC2009), G. Bigi, A. Frangioni and
M.G. Scutellà editors, paper TC2-2, Pisa, April 26-29, 2009

**Abstract:** When designing or upgrading a communication network,
operators are faced with a major issue, as uncertainty on communication
demands makes it difficult to correctly provision the network capacity.
When a probability on traffic matrices is given, finding the cheapest capacity
allocation that guarantees, within a prescribed level of confidence, that each
arc can support the traffic demands peaks turns out to be, in general, a
difficult non convex optimization problem belonging to the class of chance
constrained problems. Drawing from some very recent results in the literature
we highlight the relationships between chance constrained network design
problems and robust network optimization. We then compare several different
ways to build uncertainty sets upon deviation measures, including the recently
proposed backward and forward deviation measures that capture possible
asymmetries of the traffic demands distribution. We report results of a
computational study aimed at comparing the performance of different models when
built upon the same set of historical traffic matrices.

**Keywords:** Chance Constraint, Robust Optimization, Network Design,
Deviation Measures, Routing.

Chapter 30 of *Scienza delle decisioni in Italia: applicazioni della
ricerca operativa a problemi aziendali*, G. Felici and A. Sciomachen (Eds.),
EGIC Genova, 429–442, 2008

**Abstract:** Many real-world vehicle and crew scheduling problems can be
formulated as very-large-scale variants of Set Partitioning problems with
complicating constraints. These problems can be solved by column generation
techniques, using constrained shortest path subproblems. M.A.I.O.R. has been
using this kind of technology for many years for its customers operating in
urban and extra-urban road, air and railway transport. Recently, several
algorithmic improvements have been introduced in the M.A.I.O.R. tool suite,
leading to substantial gains in the effectiveness of the approaches across all
applications. In this chapter we describe these improvements and their impact
on the performance of the algorithms in several real-world applications,
focusing in particular on those problems where the need to solve nonstandard
formulations, and/or formulations with many complicating constraints, made the
previous approach barely functional, whereas the new techniques yield
good-quality results.

*International Journal of Electrical Power and Energy Systems*
**30**, 316–326, 2008

**Abstract:** Lagrangian Relaxation (LR) algorithms are among the most
successful approaches for solving large-scale hydro-thermal Unit Commitment
(UC) problems; this is largely due to the fact that the Single-Unit Commitment
(1UC) problems resulting from the decomposition, incorporating many kinds of
technical constraints such as minimum up- and down-time requirements and
time-dependent startup costs, can be efficiently solved by Dynamic Programming
(DP) techniques. Ramp constraints have historically eluded efficient exact DP
approaches; however, this has recently changed
[A16]. We show that the newly proposed DP
algorithm for ramp-constrained (1UC) problems makes it possible to extend
existing LR approaches to ramp-constrained (UC); this is not obvious, since the
heuristic
procedures typically used to recover a primal feasible solution are not easily
extended to take ramp limits into account. However, dealing with ramp
constraints in the subproblems turns out to be sufficient to provide the LR
heuristic enough guidance to produce good feasible solutions even with no
other modification of the approach; this is due to the fact that
(sophisticated) LR algorithms for (UC) duly exploit the primal information
computed by the Lagrangian Dual, which in the proposed approach is ramp
feasible. We also show by computational experiments that LR approaches are
competitive with those based on general-purpose Mixed-Integer Program (MIP)
solvers for
large-scale instances, especially hydro-thermal ones.

**Keywords:** Hydro-Thermal Unit Commitment, Ramp Limits,
Lagrangian Relaxation

The published paper is available at http://dx.doi.org/10.1016/j.ijepes.2007.10.003. A Technical Report version is available here.

*Optimization Methods and Software* **22**(4), 573–585,
2007

**Abstract:** Interior Point (IP) algorithms for Min Cost Flow (MCF)
problems have been shown to be competitive with combinatorial approaches,
at least on some problem classes and for very large instances. This is in
part due to availability of specialized crossover routines for MCF; these
allow early termination of the IP approach, sparing it the final
iterations where the KKT systems become more difficult to solve. As the
crossover procedures are nothing but combinatorial approaches to MCF that
are only allowed to perform few iterations, the IP algorithm can be seen as
a complex "multiple crash start" routine for the combinatorial ones. We
report our experiments about allowing one primal-dual combinatorial
algorithm for MCF to perform as many iterations as required to solve the
problem after being warm-started by an IP approach. Our results show that
the efficiency of the combined approach critically depends on the accurate
selection of a set of parameters among very many possible ones, for which
designing accurate guidelines appears not to be an easy task; however, they
also show that the combined approach can be competitive with the original
combinatorial algorithm, at least on some "difficult" instances.

**Keywords:** Min Cost Flow problems, Interior Point Algorithms,
Relaxation Method, Crossover

Download the author's version of the work, posted here by permission of Taylor & Francis for personal use, not for redistribution. The definitive version was published in Optimization Methods and Software http://dx.doi.org/10.1080/00207160600848017.

*Computational Optimization and Applications*, **36**(2-3),
271–287, 2007

**Abstract:** Support-graph preconditioners have been shown to be a
valuable tool for the iterative solution, via a Preconditioned Conjugate
Gradient method, of the KKT systems that must be solved at each iteration
of an Interior Point algorithm for the solution of Min Cost Flow problems.
These preconditioners extract a proper triangulated subgraph, with
"large" weight, of the original graph: in practice, trees and
Brother-Connected Trees (BCTs) of depth two have been shown to be the
most computationally efficient families of subgraphs. In the literature,
approximate versions of the Kruskal algorithm for maximum-weight spanning
trees have most often been used for choosing the subgraphs; Prim-based
approaches have been used for trees, but no comparison has ever been
reported. We propose Prim-based heuristics for BCTs, which require
nontrivial modifications w.r.t. the previously proposed Kruskal-based
approaches, and present a computational comparison of the different
approaches, which shows that Prim-based heuristics are most often
preferable to Kruskal-based ones.
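
As a toy illustration (not the heuristics proposed in the paper, which build BCTs rather than plain trees), a Prim-style greedy for the baseline choice, a maximum-weight spanning tree, can be sketched as follows on a made-up graph:

```python
# Prim-style greedy for a MAXIMUM-weight spanning tree (toy data; the paper's
# BCT heuristics extend this kind of scheme to richer triangulated subgraphs).
import heapq

def prim_max_tree(n, edges):
    """edges: dict (u, v) -> weight, undirected graph on nodes 0..n-1."""
    adj = {u: [] for u in range(n)}
    for (u, v), w in edges.items():
        adj[u].append((w, v)); adj[v].append((w, u))
    in_tree, tree = {0}, []
    heap = [(-w, 0, v) for w, v in adj[0]]   # negate: heapq is a min-heap
    heapq.heapify(heap)
    while heap and len(in_tree) < n:
        negw, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue
        in_tree.add(v); tree.append((u, v, -negw))
        for w, x in adj[v]:
            if x not in in_tree:
                heapq.heappush(heap, (-w, v, x))
    return tree

t = prim_max_tree(4, {(0, 1): 5, (0, 2): 1, (1, 2): 4, (2, 3): 3, (1, 3): 2})
print(sorted(t))   # the max-weight tree, total weight 12
```

A Kruskal-based alternative would instead scan all edges once in order of decreasing weight; the Prim scheme grows the subgraph from a root, which is the property the paper exploits when extending it to BCTs.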

**Keywords:** Min Cost Flow problems, Interior Point Algorithms,
Preconditioned Conjugated Gradient method,
Prim Algorithm

The published paper is available at http://dx.doi.org/10.1007/s10589-006-9005-9. A Technical Report version is available here. The paper is also available on SharedIt.

*Operations Research Letters* **35**(2), 181–185, 2007

**Abstract:** Perspective cuts are a computationally effective family
of valid inequalities, belonging to the general family of disjunctive cuts,
for Mixed-Integer Convex NonLinear Programming problems with a specific
structure. The required structure can be forced upon models that would not
originally display it by decomposing the Hessian of the problem into the
sum of two positive semidefinite matrices, a generic and a diagonal one, so
that the latter is "as large as possible". We compare two ways for
computing the diagonal matrix: an inexpensive approach requiring a minimum
eigenvalue computation and a more costly procedure which requires the
solution of a SemiDefinite Programming problem. The latter dramatically
outperforms the former at least upon instances of the Mean-Variance problem
in portfolio optimization.
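
The inexpensive variant can be sketched in a few lines on a toy matrix of our own (the SDP-based alternative needs a semidefinite solver and is omitted): shifting the Hessian by its minimum eigenvalue leaves a positive semidefinite remainder while extracting the largest possible uniform diagonal:

```python
# Sketch of the inexpensive approach (made-up Q): take D = lambda_min(Q) * I,
# the largest uniform diagonal such that Q - D stays positive semidefinite;
# the separable term x' D x can then be handled by perspective cuts.
import numpy as np

Q = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])

lam_min = np.linalg.eigvalsh(Q).min()   # one minimum-eigenvalue computation
D = lam_min * np.eye(Q.shape[0])

assert np.linalg.eigvalsh(Q - D).min() >= -1e-9   # Q - D is still PSD
print(round(float(lam_min), 4))
```

The SDP procedure compared in the paper instead maximizes the whole (non-uniform) diagonal subject to Q - D being PSD, which is what makes it both costlier and, per the reported experiments, much more effective.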

**Keywords:** Mixed-Integer Quadratic Programs, Valid Inequalities,
SemiDefinite Programming, Portfolio Optimization

The published paper is available at http://dx.doi.org/10.1016/j.orl.2006.03.008. A Technical Report version is available here.

*Mathematical Programming* **106**(2), 225–236, 2006

**Abstract:** We show that the convex envelope of the objective function
of Mixed-Integer Programming problems with a specific structure is the
perspective function of the continuous part of the objective function. Using a
characterization of the subdifferential of the perspective function, we derive
"perspective cuts", a family of valid inequalities for the problem. Perspective
cuts can be shown to belong to the general family of disjunctive cuts, but they
do not require the solution of a potentially costly nonlinear programming
problem to be separated. Using perspective cuts substantially improves the
performance of Branch-and-Cut approaches for at least two models that, either
"naturally" or after a proper reformulation, have the required structure: the
Unit Commitment problem in electrical power production and the Mean-Variance
problem in portfolio optimization.
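
On the simplest instance of the required structure, f(x) = x^2 with a semicontinuous variable, the cut family can be checked numerically; this is a toy example of our own, not taken from the paper:

```python
# Toy check: perspective cuts for f(x) = x^2 with a semicontinuous variable
# x in {0} U [1, 4] driven by a binary y. Linearizing the convex envelope
# z >= x^2 / y at a trial point xbar gives the cut  z >= 2*xbar*x - xbar^2*y.
def pcut(xbar, x, y):
    return 2 * xbar * x - xbar ** 2 * y

# validity on the feasible integer points: (x, y, z) = (0, 0, 0) and
# (x, 1, x^2) for x in [1, 4]
assert pcut(3.0, 0.0, 0) <= 0.0
for x in (1.0, 2.5, 4.0):
    assert pcut(3.0, x, 1) <= x ** 2

# strength at a fractional point: at (x, y) = (2, 0.5) the cut at xbar = 4
# enforces z >= 8 = x^2/y, while the naive relaxation only gives z >= x^2 = 4
print(pcut(4.0, 2.0, 0.5))   # 8.0
```

No nonlinear separation problem is needed: given a fractional point, the violated cut is obtained by plugging the current x into the formula, which is what makes the family computationally attractive.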

**Keywords:** Mixed-Integer Programs, Valid Inequalities,
Unit Commitment Problem, Portfolio Optimization

The published paper is available at http://dx.doi.org/10.1007/s10107-005-0594-3. A Technical Report version is available here. The paper is also available on SharedIt.

*Operations Research* **54**(4), 767–775, 2006

**Abstract:** We present a dynamic programming algorithm for solving the
single-Unit Commitment (1UC) problem with ramping constraints and arbitrary
convex cost function. The algorithm is based on a new approach for efficiently
solving the single-unit Economic Dispatch (ED) problem with ramping constraints
and arbitrary convex cost functions, improving on previously known ones that
were limited to piecewise-linear functions. For simple convex functions, such
as the quadratic ones typically used in applications, the solution cost of all
the involved (ED) problems, including that of finding an optimal primal and
dual solution, is O(*n*^{3}). Coupled with a
"smart" visit of the state-space graph in the dynamic programming algorithm,
this enables one to solve (1UC) in O(*n*^{3})
overall.
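
Purely as an illustration of the state-space DP flavor (made-up data; ramp limits and the inner (ED) computations that constitute the paper's actual contribution are omitted), a minimal commitment DP with minimum up- and down-times might look like:

```python
# Heavily simplified sketch (hypothetical data, no ramps, no (ED) subproblem):
# choose on/off per hour to maximize profit, with min-up = min-down = 2 hours
# and a fixed startup cost, via DP on states (hour, status, hours in status).
import functools

profit = [-3, 5, 6, -1, -4, 7, 8, 2]   # made-up per-hour profit when on
START = 4                              # made-up startup cost
MINUP = MINDOWN = 2

@functools.lru_cache(maxsize=None)
def best(t, on, held):
    """Max profit over hours t.. given current status 'on' held for 'held' h."""
    if t == len(profit):
        return 0
    nxt = min(held + 1, MINUP if on else MINDOWN)      # cap the counter
    val = (profit[t] if on else 0) + best(t + 1, on, nxt)
    # switching is allowed only once the minimum up/down time has elapsed
    if on and held >= MINUP:
        val = max(val, best(t + 1, False, 1))
    if not on and held >= MINDOWN:
        val = max(val, profit[t] - START + best(t + 1, True, 1))
    return val

print(best(0, False, MINDOWN))   # -> 20 (on in hours 1-2 and 5-7)
```

In the paper, each arc of the state-space graph additionally involves a ramp-constrained (ED) problem; the O(n^{3}) bound comes from solving all of them efficiently during the "smart" visit.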

**Keywords:** Dynamic Programming, Unit Commitment Problem,
Ramping Constraints

The published paper is available at http://dx.doi.org/10.1287/opre.1060.0309. The Author's version is available here.

in *Proceedings of the 19^{th} Mini-EURO Conference
in Operational Research Models and Methods in the Energy Sector* (ORMMES
2006), Coimbra, 6-8 September 2006

**Abstract:** Lagrangian Relaxation (LR) algorithms are among the most
successful approaches for solving large-scale hydro-thermal Unit Commitment
(UC) problems; this is largely due to the fact that the Single-Unit Commitment
(1UC) problems resulting from the decomposition can be efficiently solved by
Dynamic Programming (DP) techniques. Ramp constraints have historically eluded
efficient exact DP approaches; however, this has recently changed
[A16]. We show that the newly proposed DP
algorithm for ramp-constrained (1UC) problems, together with a new heuristic
phase that explicitly takes into account ramp limits on the maximum and minimum
available power at each hour, can reliably find good-quality solutions to even
large-scale (UC) instances in short time.

**Keywords:** Hydro-Thermal Unit Commitment, Ramp Limits,
Lagrangian Relaxation

*INFORMS Journal On Computing* **18**(1), 61–70, 2006

**Abstract:** In the last two decades, a number of algorithms for the
linear single-commodity Min Cost Flow problem (MCF) have been proposed, and
several efficient codes are available that implement different variants of the
algorithms. The practical significance of the algorithms has been tested by
comparing the time required by their implementations for solving "from scratch"
instances of (MCF), of different classes, as the size of the problem (number of
nodes and arcs) increases. However, in many applications several closely related
instances of (MCF) have to be sequentially solved, so that reoptimization
techniques can be used to speed up computations, and the most attractive
algorithm is the one which minimizes the total time required to solve all the
instances in the sequence. In this paper we compare the performances of four
different efficient implementations of algorithms for (MCF) under cost
reoptimization in the context of decomposition algorithms for the
Multicommodity Min Cost Flow problem (MMCF), showing that for some classes of
instances the relative performances of the codes doing "from scratch"
optimization do not accurately predict the relative performances when
reoptimization is used. Since the best solver depends both on the class and on
the size of the instance, this work also shows the usefulness of a standard
interface for (MCF) problem solvers that we have proposed and implemented.

**Keywords:** Min Cost Flow problem, Multicommodity Min Cost Flow Problem,
Reoptimization.

The published paper is available at http://dx.doi.org/10.1287/ijoc.1040.0081, as well as here.

*Annals of Operations Research* **139**, 163–193, 2005

**Abstract:** It is well-known that the Lagrangian dual of an Integer
Linear Program (ILP) provides the same bound as a continuous relaxation
involving the convex hull of all the optimal solutions of the Lagrangian
relaxation. It is less often realized that this equivalence is effective, in
that basically all known algorithms for solving the Lagrangian dual either
naturally compute an (approximate) optimal solution of the "convexified
relaxation", or can be modified to do so. After recalling these results we
elaborate on the importance of the availability of primal information produced
by the Lagrangian dual within both exact and approximate approaches to the
original (ILP), using three optimization problems with different structure to
illustrate some of the main points.
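
The recalled equivalence can be verified on a tiny example of our own (not one of the three problems treated in the paper): relaxing the knapsack constraint of a 0-1 ILP and minimizing the Lagrangian function over the multiplier reproduces the bound of the relaxation over the convex hull of the subproblem solutions, which here equals the ordinary LP bound 22/3 because the subproblem feasible set {0,1}^2 has the integrality property:

```python
# Toy ILP:  max 5 x1 + 4 x2  s.t.  3 x1 + 2 x2 <= 4,  x in {0,1}^2
# (integer optimum 5 at x = (1, 0); LP / conv-hull bound 22/3).
X = [(0, 0), (1, 0), (0, 1), (1, 1)]          # Lagrangian subproblem solutions

def L(lam):
    # Lagrangian function: relax the knapsack constraint with multiplier lam
    return max(5*x1 + 4*x2 + lam*(4 - 3*x1 - 2*x2) for x1, x2 in X)

dual = min(L(k / 1000) for k in range(5001))  # crude scan over lam in [0, 5]
print(round(dual, 2))                          # -> 7.33, i.e. ~22/3 > 5
```

Actual algorithms for the Lagrangian dual (subgradient, Bundle, ...) do not scan the multiplier space, of course; the point of the paper is that they also produce, essentially for free, the primal solution of the convexified relaxation attaining this bound.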

**Keywords:** Lagrangian dual, Integer Linear Programs

The published paper is available at http://dx.doi.org/10.1007/s10479-005-3447-9. A Technical Report version is available here.

*Proceedings of the 2^{nd} International Network
Optimization Conference* (INOC2005), L. Gouveia and C. Mourao editors,
Vol.

**Abstract:** We study 0-1 reformulations of the Network Loading problem,
a capacitated network design problem which is usually modeled with general
integer variables to represent design decisions on the number of facilities to
install on each arc of the network. The reformulations are based on the
multiple choice model, a generic approach to represent piecewise linear costs
using 0-1 variables. This model is improved by the addition of extended linking
inequalities, derived from variable disaggregation techniques. We show that
these extended linking inequalities for the 0-1 model are equivalent to the
residual capacity inequalities, a class of valid inequalities derived for the
model with general integer variables. This result yields three strategies to
compute the same lower bound on the optimal value of the problem: 1) A
Dantzig-Wolfe (DW) approach applied to the model with general integer
variables; 2) A cutting-plane algorithm based on the residual capacity
inequalities; 3) A Structured DW method that solves the 0-1 reformulation with
extended linking inequalities by variables and constraints generation.

**Keywords:** Capacitated Network Design, Network Loading,
Reformulation, Dantzig-Wolfe Decomposition

*Mathematical Programming* **104**(2-3), 375–388, 2005

**Abstract:** The semimetric polytope is an important polyhedral structure
lying at the heart of hard combinatorial problems. Therefore, linear
optimization over the semimetric polytope is crucial for a number of relevant
applications. Building on some recent polyhedral and algorithmic results about
a related polyhedron, the rooted semimetric polytope, we develop and test
several approaches, mainly based over Lagrangian relaxation and application of
Non Differentiable Optimization algorithms, for linear optimization over the
semimetric polytope. We show that some of these approaches can obtain very
accurate primal and dual solutions in a small fraction of the time required for
the same task by state-of-the-art general purpose linear programming
technology. In some cases, good estimates of the dual optimal solution (but not
of the primal solution) can be obtained even more quickly.

**Keywords:** Semimetric Polytope, Lagrangian Methods, Max-Cut,
Network Design

The published paper is available at http://dx.doi.org/10.1007/s10107-005-0620-5. A Technical Report version is available here. The paper is also available on SharedIt.

*Journal of Combinatorial Optimization* **8**, 195–220,
2004

**Abstract:** We propose new local search algorithms for minimum makespan
machine scheduling problems, which perform multiple exchanges of jobs among
machines. Inspired by the work of Thompson and Orlin (1989) on cyclic transfer
neighborhood structures, we model multiple exchanges of jobs as special
disjoint cycles and paths in a suitably defined improvement graph, by extending
definitions and properties introduced in the context of VRP (Thompson and
Psaraftis, 1993) and of CMST (Ahuja, Orlin and Sharma, 1998). Several
algorithms for searching the neighborhood are suggested. We report the results
of a wide computational experimentation, on different families of benchmark
instances, performed for the case of identical machines. The minimum makespan
machine scheduling problem with identical machines has been selected as a case
study to perform a comparison among the alternative algorithms, and to discover
families of instances for which the proposed neighborhood may be promising in
practice. On some families of instances that are very hard to solve exactly,
the most promising multi-exchange algorithms proved to dominate competitive
benchmark heuristics both in gap and in running time.

**Keywords:** Production/scheduling, Approximations/heuristic:
Multi-Exchange Neighborhood. Networks/graphs,
Flow Algorithms: Disjoint Cycle Computation

The published paper is available at http://dx.doi.org/10.1023/B:JOCO.0000031420.05971.29. A Technical Report version is available here. The paper is also available on SharedIt.

in *Integer Programming and Combinatorial Optimization - IPCO 2004*,
D. Bienstock and G. Nemhauser (Eds.), Lecture Notes in Computer Science
**3064**, Springer-Verlag, 431–443, 2004

The original publication is available at www.springerlink.com. Thanks to the copyright agreement of Lecture Notes in Computer Science, you can download the author's final version here.

*SIAM Journal on Optimization* **14**(3), 894–913, 2004

**Abstract:** We propose a new set of preconditioners for the iterative
solution, via a Preconditioned Conjugate Gradient (PCG) method, of the KKT
systems that must be solved at each iteration of an Interior Point (IP)
algorithm for the solution of linear Min Cost Flow (MCF) problems. These
preconditioners are based on the idea of extracting a proper triangulated
subgraph of the original graph which strictly contains a spanning tree. We
define a new class of triangulated graphs, called *Brother-Connected
Trees* (BCT), and discuss some fast heuristics for finding BCTs of "large"
weight. Computational experience shows that the new preconditioners can
complement tree preconditioners, outperforming them both in iterations count
and in running time on some classes of graphs.

**Keywords:** Min Cost Flow problems, Interior Point Algorithms,
Preconditioned Conjugated Gradient method,
Triangulated Graphs.

The published paper is available at http://dx.doi.org/10.1137/S105262340240519X. Due to the provisions of SIAM Copyright Assignment Agreement, it can also be downloaded here.

*IEEE Transactions on Power Systems*, **18**(1), 313–323,
2003

**Abstract:** The paper presents a simple and effective Lagrangian
relaxation approach for the solution of the optimal short-term unit commitment
problem in hydrothermal power-generation systems. The proposed approach, based
on a disaggregated Bundle method for the solution of the dual problem, with a
new warm-starting procedure, achieves accurate solutions in a few iterations.
The adoption of a disaggregated Bundle method not only improves the convergence
of the proposed approach but also provides information that is suitably
exploited for generating a feasible solution of the primal problem and for obtaining an
optimal hydro scheduling. A comparison between the proposed Lagrangian approach
and other ones, based on subgradient and Bundle methods, is presented for a
simple yet reasonable formulation of the Hydrothermal Unit Commitment
problem.

**Keywords:** Power Generation Operation, Hydrothermal Unit Commitment,
Power Generation Dispatch, Lagrangian Relaxation,
Bundle Methods

The published paper is available at http://dx.doi.org/10.1109/TPWRS.2002.807114. A Technical Report version is available here.

in *Proceedings IEEE 2003 PowerTech Bologna Conference*, A. Borghetti,
C.A. Nucci and M. Paolone editors, Paper n. **547**, 2003

**Abstract:** The paper describes a procedure developed to assist a
generating company in choosing the most convenient bidding strategies for a
day-ahead electricity energy market. According to the proposed method, the
profit maximization problem is transformed into a minimization problem that can
be solved by a traditional hydro-thermal unit commitment program after
implementing a few modifications. The paper describes the modifications
introduced in a unit commitment program based on the Lagrangian relaxation
approach and on a disaggregated Bundle method for the solution of the dual
problem. It also presents some results obtained for a realistic data set of
hydrothermal power plants. The results are discussed in order to emphasize how
the method can be applied to assess the bidding strategy choice of a given
company.

**Keywords:** Unit Commitment, Electricity Market, Bidding Strategies

*INFORMS Journal on Computing* **15**(4), 369–384, 2003

**Abstract:** We study the coarse-grained parallelization of an efficient
bundle-based cost-decomposition algorithm for the solution of multicommodity
min-cost flow (MMCF) problems. We show that a code exploiting only the natural
parallelism inherent in the cost-decomposition approach, i.e., solving the
min-cost flow subproblems in parallel, obtains satisfactory efficiencies even
with many processors on large, difficult MMCF problems with many commodities.
This is exactly the class of instances where the decomposition approach attains
its best results in the sequential setting. The parallel code we developed is highly
portable and flexible, and it can be used on different machines. We also show
how to exploit a common characteristic of current supercomputer facilities,
i.e., the side-by-side availability of massively parallel and vector
supercomputers, to implement an asymmetric decomposition algorithm where
each architecture is used for the tasks for which it is best suited.

**Keywords:** Networks-Graphs, Multicommodity; Programming,
Large-Scale Systems; Programming, Nondifferentiable

*Atti della Scuola CIRO 2002*, A. Agnetis and G. Di Pillo editors,
159–264, Pitagora Editrice, 2003

**Abstract:** Lagrangian relaxation is a widely used technique for building
relaxations of Integer Linear Programming (ILP) or mixed-integer problems, and
also for solving large-scale Linear Programming (LP) problems. It is well known
that the Lagrangian dual of an LP is equivalent to the linear dual of the
problem, and therefore to the problem itself, while the Lagrangian dual of an
ILP is equivalent to the "convexified relaxation" of the problem, in which the
feasible region of the Lagrangian relaxation is replaced by a polyhedral
representation of it: the convex hull of the integer points satisfying the
non-relaxed constraints. It is perhaps less well known that the algorithms used
to solve Lagrangian duals automatically compute, or can with minor
modifications, an (approximate) solution of the convexified relaxation, thereby
providing primal information that can be very useful when these techniques are
employed within exact or heuristic approaches for solving the original ILP
problem. We show that this is the case for the cutting-plane algorithm, known
in the LP context as the Dantzig-Wolfe decomposition method, and for many of
its variants, as well as for the more recent subgradient-type algorithms. We
then discuss the possible uses of the primal information so obtained within
approaches for ILP.

**Keywords:** Decomposition Techniques, Lagrangian Relaxations,
Integer Linear Programming.
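The equivalence recalled in the abstract above can be stated compactly. With illustrative symbols chosen here (not necessarily the survey's own notation), relaxing the complicating constraints $Ax \ge b$ of an ILP over an integer set $X$ gives the classical result that the Lagrangian dual equals the convexified relaxation:

```latex
% Lagrangian dual of an ILP (illustrative notation):
\theta(\lambda) \;=\; \min_{x \in X} \; c^{\top}x + \lambda^{\top}(b - Ax),
\qquad
\max_{\lambda \ge 0} \theta(\lambda)
\;=\; \min \{\, c^{\top}x \;:\; Ax \ge b,\; x \in \operatorname{conv}(X) \,\}.
```

The right-hand problem replaces $X$ with its convex hull, which explains why dual algorithms can recover (approximate) primal solutions of the convexified relaxation as a by-product.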

*Journal of Computational Biology*, **10**(5), 791–802,
2003

**Abstract:** Genes belonging to the same organism are called paralogs
when they show a significant similarity in the sequences, even if they have a
different biological function. It is an emergent biological paradigm that the
families of paralogs in a genome derive from a mechanism of iterated gene
duplication-with-modification. In order to investigate the evolution of
organisms, it can be useful to infer the duplications that have taken place
starting from a hypothetical original gene, and that have led to the current
family of paralog genes. This information can naturally be represented in a
*paralogy tree*. Here we present a method, called PaTre, to generate a
paralogy tree from a family of paralog genes. PaTre uses new techniques
motivated by the specific application. The reliability of the inferential
process is tested by means of a simulator that implements different hypotheses
on the duplication-with-modification paradigm, and checked on real data.

**Keywords:** Paralog Genes, Paralogy Trees, Transformation Distance,
Lightest Spanning Arborescence

*SIAM Journal on Optimization* 13(1), 117–156, 2002

**Abstract:** We study a class of generalized bundle methods for which
the stabilizing term can be any closed convex function satisfying certain
properties. This setting covers several algorithms from the literature that
have so far been regarded as distinct. Under different hypotheses on the
stabilizing term and/or the function to be minimized, we prove finite
termination, asymptotic convergence, and finite convergence to an optimal
point, with or without limits on the number of serious steps and/or requiring
the proximal parameter to go to infinity. The convergence proofs leave a high
degree of freedom in the crucial implementative features of the algorithm,
i.e., the management of the bundle of subgradients (β-strategy) and of the
proximal parameter (*t*-strategy). We extensively exploit a dual view of
bundle methods, which are shown to be a dual ascent approach to one nonlinear
problem in an appropriate dual space, where nonlinear subproblems are
approximately solved at each step with an inner linearization approach. This
allows us to precisely characterize the changes in the subproblems during the
serious steps, since the dual problem is not tied to the local concept of
ε-subdifferential. For some of the proofs, a generalization of
inf-compactness, called *-compactness, is required; this concept is related
to that of asymptotically well-behaved functions.

**Keywords:** Nondifferentiable Optimization, Bundle Methods

The published paper is available at http://dx.doi.org/10.1137/S1052623498342186. Due to the provisions of SIAM Copyright Assignment Agreement, it can also be downloaded here.

in *Proceedings IEEE 2001 PowerTech Porto Conference*, J.T. Saraiva and
M.A. Matos editors, Vol. **3**, Paper n. **PSO5-397**, 2001

**Abstract:** The paper deals with the solution of the optimal short-term
unit commitment (UC) problem in an electric power system. The optimization
model takes into account the main operating constraints and physical
characteristics of the power generation system. A comparison between Lagrangian
heuristics and Tabu Search techniques on different classes of realistic
instances is presented. Such a comparison is aimed at highlighting the strong
features and the weaknesses of each technique, in view of their application in
computer models for competitive electricity markets in progressive evolution.
The comparison may provide insights for the construction of hybrid techniques
that incorporate the best of both approaches.

**Keywords:** Lagrangian Relaxation, Optimization Methods,
Power Generation Dispatch, Tabu Search, Unit Commitment

in *Lecture Notes in Computer Science* **1981**, Vector and
Parallel Processing - VECPAR 2000, J.M. Palma, J. Dongarra and V. Hernandez
(Eds.), Springer-Verlag, 301–315, 2001

**Abstract:** A parallel implementation of the specialized interior-point
algorithm for multicommodity network flows introduced in [5] is presented. In
this algorithm, the positive definite systems of each iteration are solved
through a scheme that combines direct factorization and a preconditioned
conjugate gradient (PCG) method. Since the solution of at least k independent
linear systems is required at each iteration of the PCG, k being the number of
commodities, a coarse-grained parallelization of the algorithm naturally
arises, where these systems are solved on different processors. Also, several
other minor steps of the algorithm are easily parallelized by commodity. An
extensive set of computational results on a shared memory machine is presented,
using problems of up to 2.5 million variables and 260,000 constraints. The
results show that the approach is especially competitive on large, difficult
multicommodity flow problems.

**Keywords:** Multicommodity Flows, Interior-Point Algorithms,
Preconditioned Conjugate Gradient (PCG) Method,
Parallel Computing

The original publication is available at www.springerlink.com. Thanks to the copyright agreement of Lecture Notes in Computer Science, you can download the author's final version here.

*SIAM Journal on Matrix Analysis and Applications* **23**(2),
339–348, 2001

**Abstract:** We study the extreme singular values of incidence graph
matrices, obtaining lower and upper estimates that are asymptotically tight.
This analysis is then used for obtaining estimates on the spectral condition
number of some weighted graph matrices. A short discussion on possible
preconditioning strategies within interior-point methods for network flow
problems is also included.

**Keywords:** Graph Matrices, Conditioning and Preconditioning

*Discrete Applied Mathematics*, **112**(1-3), 73–99, 2001

**Abstract:** To efficiently derive bounds for large-scale instances of the
capacitated fixed-charge network design problem, Lagrangian relaxations appear
promising. This paper presents the results of comprehensive experiments aimed
at calibrating and comparing bundle and subgradient methods applied to the
optimization of Lagrangian duals arising from two Lagrangian relaxations. This
study substantiates the fact that bundle methods appear superior to subgradient
approaches because they converge faster and are more robust relative to
different relaxations, problem characteristics, and selection of the initial
parameter values. It also demonstrates that effective lower bounds may be
computed efficiently for large-scale instances of the capacitated fixed-charge
network design problem. Indeed, in a fraction of the time required by a
standard simplex approach to solve the linear programming relaxation, the
methods we present attain very high quality solutions.

**Keywords:** Multicommodity Capacitated Fixed-Charge Network Design,
Lagrangian Relaxation, Subgradient Methods,
Bundle Methods

The published paper is available at http://dx.doi.org/10.1016/S0166-218X(00)00310-3. A Technical Report version is available here.

*Technical Report* **00-09**, Dipartimento di Informatica,
Università di Pisa, 2000

**Abstract:** An effective Branch and Bound algorithm based on Lagrangian
heuristic is presented, which aims at reducing the gap between the lower and
upper bounds. The Branch and Bound method proposed exploits the information
gathered while going down the enumeration tree in order to efficiently solve
the subproblems related to other nodes. This is accomplished by using a bundle
method to solve at each node the Lagrangian dual. A discrete combined
location-routing model, which we refer to as Obnoxious Facility Location and
Routing model (OFLR), is used as a benchmark to validate the efficiency of the
proposed method. Such a model simultaneously locates obnoxious facilities and
routes obnoxious materials between a set of built-up areas and the facilities.
Obnoxious facilities are those facilities which cause nuisance to people as
well as to the environment, e.g., dump sites, chemical industrial plants,
electric power supply networks, nuclear reactors and so on. OFLR is a
particular case of the Obnoxious Network Design problem, which arises when an
obnoxious network must also be designed, as in the case of electric power
supply networks. These are meaningful problems given the industrial
development and the increasing importance of environmental issues in our
society. The use of the B&B to solve this latter problem is under
development.

Chapter **1** in *Telecommunications Network Planning*, P. Soriano
and B. Sanso editors, Kluwer Academic Publishers, 1–19, 1999

**Abstract:** This paper presents a comprehensive survey of models and
algorithms for multicommodity capacitated network design problems, which are
mostly encountered in telecommunications and transportation network planning.
These problems are important not only due to the major relevance of their
applications, but also because they pose considerable modeling and algorithmic
challenges. We present a general arc-based model, describe useful alternative
formulations and survey the literature on simplex-based cutting plane and
Lagrangian relaxation approaches. We then focus on our own contributions that
develop and compare several relaxation methods for a particular case of this
model, the fixed-charge problem. These methods are based on Lagrangian
relaxation and nondifferentiable optimization techniques, namely, the
subgradient and bundle approaches. Our experimental results, while very
encouraging, indicate that efficiently solving these difficult problems requires
a judicious combination of cutting planes, Lagrangian relaxation methods and
sophisticated heuristics. In addition, due to their inherent decomposition
properties, these techniques can be adapted to parallel computing environments,
which is highly desirable in order to solve realistically sized instances.

**Keywords:** Multicommodity Capacitated Network Design, Cutting Planes,
Lagrangian Relaxation, Nondifferentiable Optimization,
Parallel Computing

*INFORMS Journal on Computing* **11**(4), 370–393, 1999

**Abstract:** We present a Cost Decomposition approach for the linear
Multicommodity Min-Cost Flow problem, where the mutual capacity constraints
are dualized and the resulting Lagrangian Dual is solved with a dual-ascent
algorithm belonging to the class of Bundle methods. Although decomposition
approaches to block-structured Linear Programs have been reported not to be
competitive with general-purpose software, our extensive computational
comparison shows that, when carefully implemented, a decomposition algorithm
can outperform several other approaches, especially on problems where the
number of commodities is "large" with respect to the size of the graph. Our
specialized Bundle algorithm is characterized by a new heuristic for the
trust region parameter handling, and embeds a specialized Quadratic Program
solver that allows the efficient implementation of strategies for reducing
the number of active Lagrangian variables. We also exploit the structural
properties of the single-commodity Min-Cost Flow subproblems to reduce the
overall computational cost. The proposed approach can be easily extended
to handle variants of the problem.

**Keywords:** Multicommodity Flows, Bundle methods, Decomposition Methods,
Lagrangian Dual

*Technical Report* **99-05**, Dipartimento di Informatica,
Università di Pisa, 1999

**Abstract:** We propose some techniques, based on minimum cost flow
computations, for efficiently computing lower bounds for the Capacitated
Minimum Spanning Tree problem (CMST). With respect to other methods proposed in
the literature, our approaches often provide comparable bounds, with a
substantially lower computational cost. For this reason, and for other features
discussed in the paper, the proposed methods may be particularly promising in
the context of exact algorithms for CMST.

**Keywords:** Capacitated Minimum Spanning Tree, Minimum Cost Flow,
Lagrangian Relaxations

*Computers & Operations Research* **23**(11), 1099–1118,
1996

**Abstract:** Bundle methods for Nondifferentiable Optimization are widely
recognised as one of the best choices for the solution of Lagrangian Duals; one
of their major drawbacks is that they require the solution of a Semidefinite
Quadratic Programming subproblem at every iteration. We present an active-set
method for the solution of such problems, that enhances upon the ones in the
literature by distinguishing among bases with different properties and
exploiting their structure in order to reduce the computational cost of the
basic step. Furthermore, we show how the algorithm can be adapted to the
several needs that arise in practice within Bundle algorithms; we describe how
it is possible to allow constraints on the primal direction, how special (box)
constraints can be more efficiently dealt with and how to accommodate changes
in the number of variables of the nondifferentiable function. Finally, we
describe the important implementation issues, and we report some computational
experience to show that the algorithm is competitive with other QP codes when
used within a Bundle code for the solution of Lagrangian Duals of large-scale
(Integer) Linear Programs.

**Keywords:** Nondifferentiable Optimization, Bundle Methods,
Quadratic Programming, Least Squares

The published paper is available at http://dx.doi.org/10.1016/0305-0548(96)00006-8. A Technical Report version is available here.

*Ricerca Operativa* **XXV**, n. 74, 5–49, 1995

**Abstract:** Among practical methods for large-scale Nondifferentiable
Optimization (NDO), Bundle methods are widely recognized to play a relevant
role; despite their potential, however, they are seldom utilized for the
maximization of polyhedral functions, which arises in many different contexts
such as Lagrangian Duals and decomposition algorithms, since simpler-to-program
but less efficient approaches like subgradient methods are preferred. The aim
of this work is to provide an applications-oriented survey of the theory of
Bundle methods when applied to problems arising in continuous and combinatorial
optimization, with an introduction to the several variants of Bundle approaches
that can be built up by using a limited set of basic concepts and tools.

**Keywords:** Nondifferentiable Optimization, Bundle methods,
Lagrangian Duals
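As a toy illustration of the "simpler-to-program but less efficient" subgradient alternative contrasted with Bundle methods in the abstract above, the following sketch maximizes a small concave piecewise-linear function, standing in for a Lagrangian dual, with a diminishing-stepsize supergradient iteration. The instance and all names are hypothetical, chosen only for illustration; they do not come from the survey.

```python
def theta(lmb):
    # Hypothetical concave piecewise-linear dual function:
    # theta(l) = min(2l + 1, -l + 4), maximized at l = 1 with value 3.
    return min(2 * lmb + 1, -lmb + 4)

def supergradient(lmb):
    # A supergradient of theta at lmb: the slope of an active piece.
    return 2 if 2 * lmb + 1 <= -lmb + 4 else -1

def subgradient_maximize(lmb=0.0, iters=200):
    # Projected subgradient ascent with the classical diminishing,
    # non-summable stepsize 1/k; returns the best dual value seen.
    best = theta(lmb)
    for k in range(1, iters + 1):
        lmb += (1.0 / k) * supergradient(lmb)
        best = max(best, theta(lmb))
    return best

if __name__ == "__main__":
    print(f"best dual value found: {subgradient_maximize():.3f}")
```

A Bundle method would instead keep all the supergradients collected so far and solve a small quadratic subproblem per iteration, trading per-step cost for far fewer iterations, which is precisely the trade-off the survey discusses.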

*European Journal of Operational Research* **82**(3), 615–646,
1995

**Abstract:** We extend some known results about the Bilevel
Linear Problem (BLP), a hierarchical two-stage optimization problem, showing
how it can be used to reformulate any Mixed Integer (Linear) Problem; then, we
introduce some new concepts, which might be useful to speed up almost all the
known algorithms devised for BLP. As this kind of reformulation appears to be
somewhat artificial, we define a natural generalization of BLP, the Bilevel
Linear/Quadratic Problem (BL/QP), and show that most of the exact and/or
approximate algorithms originally devised for the BLP, such as GSA or K-th
Best, can be extended to this new class of Bilevel Programming Problems. For
BL/QP, more "natural" reformulations of MIPs are available, leading to the use
of known (nonexact) algorithms for BLP as (heuristic) approaches to MIPs: we
report some contrasting results obtained in the Network Design Problem case,
showing that, although the direct application of our (Dual) GSA algorithm is
not of any practical use, we obtain as a by-product a good theoretical
characterization of the optimal solutions set of the NDP, along with a powerful
scheme for constructing fast algorithms for the Minimum Cost Flow Problem with
piecewise convex linear cost functions.

**Keywords:** Bilevel Problems, Integer Programming,
Network Programming

The published paper is available at http://dx.doi.org/10.1016/0377-2217(93)E0217-L. No Technical Report was done at the time; I apologize, but I was young and hadn't learnt the trick yet. Should you fancy a copy, please be sure to ask.