Descent and penalization techniques for equilibrium problems with nonlinear constraints
Giancarlo Bigi and Mauro Passacantando
Summary
The equilibrium problem provides a general setting in which a large number of problems, such as scalar and vector optimization, variational inequalities, fixed point and complementarity problems, saddle points and noncooperative games, can be formulated in a unified format. A huge number of applications have been developed in different areas such as economics, the environment, transportation, information technology and telecommunications: recent papers have focused on oligopolistic and spatial price markets, auction and financial markets, risk management, climate policies, traffic and pricing over telecommunication networks or public roads, cloud computing, power allocation in radio systems, and internet advertising.
Several kinds of methods for solving equilibrium problems have been proposed. One popular approach relies on reformulating the equilibrium problem as an optimization problem through appropriate gap or D-gap functions, and many ad hoc descent methods for minimizing the chosen gap function have been developed. Most of them require computing the minimum of a convex function over the (convex) feasible region of the equilibrium problem just to evaluate the gap function at a given point. Therefore, this evaluation can be computationally expensive when the feasible region is described by nonlinear (convex) constraints.
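For context, the classical regularized gap function for the equilibrium problem of finding $x^* \in C$ such that $f(x^*,y) \ge 0$ for all $y \in C$ can be written as follows (the notation here is an illustrative assumption, not necessarily the one adopted in the paper):

```latex
\varphi_\alpha(x) \;=\; \max_{y \in C} \left\{ -f(x,y) - \frac{\alpha}{2}\,\|y - x\|^2 \right\}, \qquad \alpha \ge 0.
```

Under standard assumptions, $\varphi_\alpha(x) \ge 0$ for all $x \in C$, and a point $x^* \in C$ with $\varphi_\alpha(x^*) = 0$ solves the equilibrium problem. Evaluating $\varphi_\alpha$ at a single point thus amounts to solving an optimization problem over $C$, which is the expensive step when $C$ is described by nonlinear constraints.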
Recently, a gap function that uses a polyhedral approximation of the feasible region has been introduced. This paper exploits this new gap function to develop two descent methods for solving equilibrium problems with nonlinear constraints. Unlike most of the known algorithms, both methods rely on a search direction that may be infeasible. Therefore, some penalization technique is needed: an exact penalty term is introduced, and a descent direction for the penalized gap function is available provided that the penalization parameter is small enough. It is worth remarking that the penalization parameter is updated throughout the iterations of the algorithms whenever needed.
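To make the penalization idea concrete, one common exact-penalty construction for a feasible region $C = \{x \in \mathbb{R}^n : g_i(x) \le 0,\ i = 1,\dots,m\}$ is sketched below; this is a standard textbook form given for illustration only, and the precise penalized gap function used in the paper may differ:

```latex
\varphi_\varepsilon(x) \;=\; \varphi(x) \;+\; \frac{1}{\varepsilon} \sum_{i=1}^{m} \max\{0,\, g_i(x)\}, \qquad \varepsilon > 0,
```

where $\varphi$ denotes the chosen gap function. When $\varepsilon$ is small enough, such a penalty is exact: minimizers of $\varphi_\varepsilon$ over a simpler (e.g. polyhedral) set coincide with solutions of the original constrained problem, which is consistent with the requirement that the penalization parameter be sufficiently small.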
These new methods have some advantages over most of the available descent methods. At each iteration, a convex optimization problem with linear constraints is solved instead of a convex problem with nonlinear constraints. Moreover, the key assumption for convergence is weaker than those required by the most widespread methods. Finally, the parameters are bounded away from zero, whereas in the other methods that perform parameter updates they might go to zero and thus cause numerical instability.
The paper is organized as follows. Section 2 recalls some well-known definitions and results about gap functions for equilibrium problems, and a key lemma on the penalized gap function is proved. Section 3 presents the two new solution methods and proves their convergence under standard assumptions. Finally, Sect. 4 contains some final remarks and comparisons between the new methods and those already available in the literature.
The original paper is available at http://dx.doi.org/10.1007/s10957-013-0473-7.
A Technical Report version is available here.