Parametric programming
Latest revision as of 16:44, 20 January 2024


The branch of mathematical programming concerned with the investigation of optimization problems in which the conditions of feasibility and (or) the objective function depend on one or several deterministic parameters. (Problems in which these parameters are random form the object of stochastic programming.)

In its general form a parametric programming problem consists of maximizing an objective function $ f(x, \lambda) $ over all $ x = (x_1, \dots, x_n) \in \mathbf{R}^n $ subject to constraints

$$ \tag{1} g_i(x, \lambda) \leq b_i(\lambda), \quad i = 1, \dots, m, $$

where $ \lambda $ is the vector of parameters, belonging to a given parameter set $ \Lambda \subset \mathbf{R}^p $. For any fixed $ \lambda $ this is an ordinary mathematical programming problem. Let $ \Lambda' \subset \Lambda $ be the set of those values $ \lambda $ for which this problem is solvable (the solvability set). An optimal solution $ x^\star = x_\lambda^\star $ is a function of $ \lambda $ in a natural way. By the solution of a parametric programming problem one means the family $ \{ x_\lambda^\star \} $ for all $ \lambda \in \Lambda' $.
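This definition can be made concrete with a small hypothetical example (not from the article): maximize $ f(x, \lambda) = -(x - \lambda)^2 $ subject to $ x \leq 1 $. For each fixed $ \lambda $ this is an ordinary one-dimensional problem, and the solution of the parametric problem is the whole family $ x_\lambda^\star = \min(\lambda, 1) $. A sketch in Python, checking the closed form against brute-force maximization:

```python
# Hypothetical toy problem (not from the article): maximize
#   f(x, lam) = -(x - lam)**2   subject to   x <= 1.
# For each fixed lam this is an ordinary one-dimensional problem; the
# solution of the *parametric* problem is the whole family
#   x*_lam = min(lam, 1), defined for every lam (so Lambda' = R here).

def f(x, lam):
    return -(x - lam) ** 2

def x_star(lam):
    # closed-form optimal solution of the fixed-lam problem
    return min(lam, 1.0)

def x_star_grid(lam, lo=-5.0, hi=1.0, steps=100001):
    # brute-force check: maximize f(., lam) over a fine grid of [lo, hi]
    best_x, best_v = lo, f(lo, lam)
    for k in range(1, steps):
        x = lo + (hi - lo) * k / (steps - 1)
        v = f(x, lam)
        if v > best_v:
            best_x, best_v = x, v
    return best_x

for lam in (-2.0, 0.25, 3.0):
    assert abs(x_star(lam) - x_star_grid(lam)) < 1e-3
```

For $ \lambda \leq 1 $ the unconstrained maximizer is feasible; for $ \lambda > 1 $ the constraint is active, so the optimal solution is a (continuous, piecewise-defined) function of the parameter.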

The sources of parametric programming problems are fairly diverse. First, there is the need to reflect the uncertainty with which some or all of the initial data of a practical optimization problem are frequently specified, or to combine several versions of a problem (or a whole family of problems depending, for example, on time) in a single formulation. Parametric programming is also the most natural setting for the important problem of the stability of solutions of an optimization problem under variations of the initial data. Finally, the problem of finding the set of Pareto optima in multi-criterion optimization is closely connected with parametric programming.

If for any fixed $ \lambda $ a parametric programming problem is a linear programming (convex programming, etc.) problem, then one speaks of a linear (or convex, etc.) parametric programming problem.

In general form the analysis of parametric programming problems has the following aims: 1) to find and clarify properties of the solvability sets $ \Lambda' $; 2) to find the domains of stability of solutions and to characterize their structure, and also to analyze the behaviour of unstable problems; 3) to characterize the dependence of the optimal value of the objective function on the vector of parameters.

In full generality (that is, for arbitrary objective functions, constraints and domains $ \Lambda $ of variation of the parameters) these problems are very difficult. Only certain classes of problems are fairly well advanced in theoretical and computational respects. This concerns mainly linear parametric programming problems in which either: a) the objective function depends linearly on a single scalar parameter $ (\Lambda = \mathbf{R}^1) $; or b) the right-hand sides of the constraints depend linearly on one parameter; or c) the objective function and the right-hand sides of the constraints depend linearly on two independent scalar parameters or on a single parameter; or d) the objective function depends linearly on a vector parameter; or e) the right-hand sides of the constraints depend linearly on a vector parameter. The case when the matrix of constraints of a linear programming problem depends on parameters is very complicated and has so far (1990) not been sufficiently well investigated. In case a), for example, the solutions of problems 1)–3) above can be characterized as follows. It is required to maximize

$$ \tag{2} \sum_{j=1}^{n} (c_j + \lambda c_j') x_j $$

under the constraints

$$ \tag{3} \sum_{j=1}^{n} a_{ij} x_j \leq b_i, \quad i = 1, \dots, m; \qquad x_j \geq 0, \quad j = 1, \dots, n, $$

where $ \lambda \in \Lambda = \mathbf{R}^1 $. Then there is a partition of $ \Lambda $ into finitely many left-open intervals, $ \Lambda = \cup_{k=1}^{q} \Lambda^k $ (where $ \Lambda^1 $ is unbounded to the left, $ \Lambda^q $ is unbounded to the right, and one of them can coincide with $ \Lambda $), such that for all $ \lambda \in \Lambda^k $ the corresponding linear programming problem is solvable, and such that on every interval $ \Lambda^k $, $ k = 1, \dots, q $, it has one and the same basis. The only exceptions can be $ \Lambda^1 $ and $ \Lambda^q $, on which the objective function (2) may be unbounded. Thus, in a given problem the solvability set $ \Lambda' $ is the union of all the $ \Lambda^k $, with the possible exception of $ \Lambda^1 $ and (or) $ \Lambda^q $. Next, the optimal value of the objective function on each $ \Lambda^k $, $ k = 1, \dots, q $, is a convex piecewise-linear function of $ \lambda $.
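Case a) can be illustrated numerically. Since a linear program attains its optimum at a vertex of the feasible polytope, the optimal value $ V(\lambda) = \max_v (c + \lambda c') \cdot v $ over the finitely many vertices $ v $ is a maximum of affine functions of $ \lambda $, hence convex and piecewise linear, with one vertex (one basis) optimal on each interval. A minimal sketch, with a hypothetical feasible set whose vertices are listed explicitly:

```python
# Hypothetical data (not from the article): maximize (c + lam*c') . x
# over the polytope  x1 + x2 <= 4,  x1 <= 3,  x >= 0,  whose vertices
# are listed below.  An LP attains its optimum at a vertex, so
#   V(lam) = max over vertices v of (c + lam*c') . v
# is a maximum of finitely many affine functions of lam -- convex and
# piecewise linear, with one vertex (one basis) optimal per interval.

VERTICES = [(0.0, 0.0), (3.0, 0.0), (3.0, 1.0), (0.0, 4.0)]
c  = (1.0, 0.0)   # fixed part of the objective, c_j in (2)
cp = (-1.0, 1.0)  # parametric part, c'_j in (2)

def value(lam):
    return max((c[0] + lam * cp[0]) * x1 + (c[1] + lam * cp[1]) * x2
               for x1, x2 in VERTICES)

# Here V(lam) = 3 - 3*lam on (-inf, 0], 3 - 2*lam on [0, 1/2],
# and 4*lam on [1/2, +inf): three intervals, three optimal bases.
assert value(-1.0) == 6.0 and value(0.25) == 2.5 and value(1.0) == 4.0
```

The three affine pieces correspond to the vertices $(3,0)$, $(3,1)$ and $(0,4)$ being optimal in turn as $ \lambda $ increases, matching the interval structure described above.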

Numerical methods for solving one-parameter problems in linear parametric programming $ (\Lambda = \mathbf{R}^1) $ are modifications of the simplex method; in the case of a multi-dimensional parameter space one has to use more involved arguments. In recent years, many important results have been obtained in non-linear (especially quadratic and convex) parametric programming.
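What such a parametric sweep computes — the finite set of breakpoints of $ \lambda $ at which the optimal basis changes — can be mimicked crudely with a grid search. The data below are hypothetical: each pair $(a, b)$ encodes the objective value $ a + b\lambda $ of one vertex of a small linear program.

```python
# Crude stand-in (hypothetical data, not from the article) for what a
# parametric-simplex sweep computes: sweep lam over a grid and record
# the breakpoints where the optimal vertex (basis) changes.  Each pair
# (a, b) is the objective value a + b*lam of one vertex of a small LP.

LINES = [(0.0, 0.0), (3.0, -3.0), (3.0, -2.0), (0.0, 4.0)]

def argmax_vertex(lam, lines):
    vals = [a + b * lam for a, b in lines]
    return vals.index(max(vals))

def breakpoints(lines, lo=-2.0, hi=2.0, steps=4001):
    # grid sweep; a real parametric simplex method instead finds the
    # breakpoints exactly from sign changes of the reduced costs
    bps, prev = [], None
    for k in range(steps):
        lam = lo + (hi - lo) * k / (steps - 1)
        cur = argmax_vertex(lam, lines)
        if prev is not None and cur != prev:
            bps.append(lam)
        prev = cur
    return bps

# For these data the optimal basis changes near lam = 0 and lam = 1/2.
assert len(breakpoints(LINES)) == 2
```

The grid sweep locates each breakpoint only to within the step size; the appeal of the simplex-based methods is that they recover the interval endpoints exactly and pivot from one optimal basis to the next.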

References

[1] B. Bank, J. Guddat, D. Klatte, B. Kummer, K. Tammer, "Nonlinear parametric optimization" , Birkhäuser (1983)
[2] F. Nožička, J. Guddat, H. Hollatz, "Theorie der linearen parametrischen Optimierung" , Akademie Verlag (1974)

Comments

Closely related to parametric programming and the stability analysis of programming problems is sensitivity analysis, i.e. the study of the sensitivity of solutions to changes in the parameters defining the programming problem.

References

[a1] W. Dinkelbach, "Sensitivitätsanalysen und parametrische Programmierung" , Springer (1969)
[a2] S. Zionts, "Linear and integer programming" , Prentice-Hall (1974)
[a3] A.M. Geoffrion, "Strictly concave parametric programming I, II" Management Science , 13 (1966/67) pp. 244–253; 359–370
How to Cite This Entry:
Parametric programming. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Parametric_programming&oldid=13537
This article was adapted from an original article by A.A. Korbut (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.