Young measure

parametrized measure, relaxed control, stochastic kernel

A family of probability measures $\nu = \{ \nu _ { x } \} _ { x \in \Omega }$, one for each point $x$ in a domain $\Omega$ (cf. also Probability measure), associated to a sequence of functions $f _ { j } : \Omega \rightarrow \mathbf{R} ^ { d }$ with the fundamental property that

\begin{equation*} \operatorname { lim } _ { j \rightarrow \infty } \int _ { \Omega } \varphi ( x , f _ { j } ( x ) ) \, d x = \int _ { \Omega } \int _ { \mathbf{R} ^ { d } } \varphi ( x , \lambda ) \, d \nu _ { x } ( \lambda ) \, d x , \end{equation*}

for any Carathéodory function $\varphi : \Omega \times \mathbf{R} ^ { d } \rightarrow \mathbf{R}$. The Young measure $\nu$ depends upon the sequence $\{ f _ { j } \}$ but is independent of $\varphi$ ([a1], [a5], [a8]).
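
As a concrete illustration (a standard example, not taken from the original article), the rapidly oscillating functions $f _ { j } ( x ) = \operatorname { sign } ( \operatorname { sin } ( 2 \pi j x ) )$ on $\Omega = ( 0 , 1 )$ generate the homogeneous Young measure $\nu _ { x } = \frac { 1 } { 2 } \delta _ { - 1 } + \frac { 1 } { 2 } \delta _ { + 1 }$ for every $x$. The following Python sketch checks the fundamental property numerically; the particular test function $\varphi$ and the quadrature grid are arbitrary illustrative choices.

```python
# A minimal numerical sketch (not part of the original article).  The rapidly
# oscillating sequence f_j(x) = sign(sin(2*pi*j*x)) on Omega = (0, 1) is a
# standard example generating the homogeneous Young measure
#     nu_x = (1/2) delta_{-1} + (1/2) delta_{+1}   for every x.
# The integrand phi below is an arbitrary Caratheodory function chosen only
# to illustrate the fundamental property numerically.
import numpy as np

def f(j, x):
    """j-th member of the oscillating sequence (values +-1 on alternating half-periods)."""
    return np.sign(np.sin(2.0 * np.pi * j * x))

def phi(x, lam):
    """A sample Caratheodory function phi(x, lambda)."""
    return np.exp(-x) * lam**2 + x * lam

x = (np.arange(200_000) + 0.5) / 200_000     # midpoint quadrature nodes on (0, 1)

# Right-hand side of the fundamental property: int_Omega int_R phi(x, lambda) d nu_x(lambda) dx.
rhs = np.mean(0.5 * phi(x, -1.0) + 0.5 * phi(x, 1.0))

for j in (1, 10, 100):
    lhs = np.mean(phi(x, f(j, x)))           # int_Omega phi(x, f_j(x)) dx
    print(f"j = {j:4d}:  {lhs:.6f}   ->   limit {rhs:.6f}")
```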

The main area where Young measures have recently been used is optimization theory. Optimization problems where a local, integral cost functional is to be minimized in a suitable class of functions often lack optimal solutions because of the presence of some non-convexity. In such cases, a single function is unable to reproduce the optimal behaviour, due precisely to this lack of optimal solutions, and one must resort to sequences (the so-called minimizing sequences) in order to comprehend the main features of optimality. From the standpoint of the above-mentioned class of cost functionals, Young measures furnish a convenient way of dealing with optimal behaviour, paying attention only to those features that make a behaviour optimal and disregarding accidental properties.

The way in which this process is accomplished can be described as follows. Let

\begin{equation*} I : \mathcal{A} \rightarrow \mathbf{R} \cup \{ + \infty \} \end{equation*}

be a local, integral cost functional defined on an admissible class of functions $\mathcal{A}$. Typically,

\begin{equation*} I ( u ) = \int _ { \Omega } F ( x , u ( x ) , \nabla u ( x ) , \ldots ) d x, \end{equation*}

where the dots indicate possible higher-order derivatives. The optimization problem of interest is to comprehend how the infimum

\begin{equation*} \operatorname { inf } _ { u \in \mathcal{A} } I ( u ) \end{equation*}

is realized. One introduces a generalized optimization problem, intimately connected to the one above, by putting

\begin{equation*} \tilde{I} ( \nu ) = \operatorname { lim } _ { j \rightarrow \infty } I ( u _ { j } ) \end{equation*}

when $\nu$ is the Young measure associated to the sequence $\{ u _ { j } \} \subset \mathcal{A}$. If $\tilde { \mathcal { A } }$ stands for the set of all such Young measures, one would like to understand the optimal behaviour for

\begin{equation*} \operatorname { inf } _ { \nu \in \tilde { \mathcal { A } } } \tilde{I} ( \nu ). \end{equation*}

Due to the fundamental property of the Young measure indicated above, the optimal behaviour for this new optimization problem can always be described with a single element in $\tilde { \mathcal { A } }$, which in turn is generated by minimizing sequences of the original optimization problem. The whole point is to be able to study the generalized optimization problem by itself, and then to interpret that information in terms of minimizing sequences of the initial optimization problem. The main issue here is to find ways of characterizing the admissible set $\tilde { \mathcal { A } }$ that allow for an independent treatment of the generalized optimization problem. In particular, understanding how constraints in $\tilde { \mathcal { A } }$ are determined by constraints in $\mathcal{A}$ is a major challenge. See [a3], [a4].
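
A classical example of this relaxation phenomenon (included here as an illustration, not part of the original article) is the Bolza-type problem of minimizing $I ( u ) = \int _ { 0 } ^ { 1 } [ ( u ^ { \prime } ( x ) ^ { 2 } - 1 ) ^ { 2 } + u ( x ) ^ { 2 } ] \, d x$ over functions with $u ( 0 ) = u ( 1 ) = 0$: the infimum is $0$ but is not attained, sawtooth minimizing sequences with slopes $\pm 1$ oscillate ever faster, and the optimal generalized behaviour is captured by the Young measure $\nu _ { x } = \frac { 1 } { 2 } \delta _ { - 1 } + \frac { 1 } { 2 } \delta _ { + 1 }$ associated to their derivatives. The sketch below evaluates $I$ along such a sequence numerically; the sawtooth construction and the quadrature are illustrative choices.

```python
# A minimal sketch (illustration only, not from the original article) of the
# classical Bolza-type problem
#     I(u) = int_0^1 [ (u'(x)^2 - 1)^2 + u(x)^2 ] dx,   u(0) = u(1) = 0,
# whose infimum 0 is not attained by any single function: u' = +-1 and u = 0
# cannot hold simultaneously.  The sawtooth minimizing sequence u_j (slopes
# +-1, teeth of width 1/j) drives I(u_j) -> 0, and its derivatives generate
# the optimal Young measure nu_x = (1/2) delta_{-1} + (1/2) delta_{+1}.
import numpy as np

def u_j(j, x):
    """Sawtooth with j teeth: distance from x to the nearest multiple of 1/j."""
    t = np.mod(x, 1.0 / j)
    return np.minimum(t, 1.0 / j - t)

def du_j(j, x):
    """A.e. derivative of u_j: +1 on the rising half of each tooth, -1 on the falling half."""
    t = np.mod(x, 1.0 / j)
    return np.where(t < 0.5 / j, 1.0, -1.0)

def energy(j, n=200_000):
    """Midpoint-rule value of I(u_j) on (0, 1)."""
    x = (np.arange(n) + 0.5) / n
    return np.mean((du_j(j, x) ** 2 - 1.0) ** 2 + u_j(j, x) ** 2)

for j in (1, 10, 100):
    print(f"j = {j:4d}:  I(u_j) = {energy(j):.6f}")   # decays like 1/(12 j^2) toward the infimum 0
```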

Young measures were originally introduced in the context of optimal control problems [a9], [a10] (cf. also Optimal control), and have also been used in some situations for problems in partial differential equations [a2], [a6], [a7].

References

[a1] E.J. Balder, "Lectures on Young Measures" Cah. de Ceremade , 9512 (1995) MR1798830
[a2] R.J. DiPerna, "Compensated compactness and general systems of conservation laws" Trans. Amer. Math. Soc. , 292 (1985) pp. 383–420 Zbl 0606.35052
[a3] D. Kinderlehrer, P. Pedregal, "Characterizations of Young measures generated by gradients" Arch. Rat. Mech. Anal. , 115 (1991) pp. 329–365 MR1120852 Zbl 0754.49020
[a4] D. Kinderlehrer, P. Pedregal, "Gradient Young measures generated by sequences in Sobolev spaces" J. Geom. Anal. , 4 (1994) pp. 59–90 MR1274138 MR1268904 Zbl 0808.46046 Zbl 0828.46031
[a5] P. Pedregal, "Parametrized measures and variational principles" , Birkhäuser (1997) MR1452107 Zbl 0879.49017
[a6] L. Tartar, "Compensated compactness and applications to partial differential equations" R. Knops (ed.) , Nonlinear Analysis and Mechanics: Heriot–Watt Symposium IV , Res. Notes Math. , 39 , Pitman (1979) pp. 136–212
[a7] L. Tartar, "The compensated compactness method applied to systems of conservation laws" J.M. Ball (ed.) , Systems of Nonlinear Partial Differential Equations , Reidel (1983) MR0725524 Zbl 0536.35003
[a8] M. Valadier, "Young measures" , Methods of Nonconvex Analysis , Lecture Notes Math. , 1446 , Springer (1990) pp. 152–188 MR1114673 MR1079763 Zbl 0742.49010 Zbl 0738.28004
[a9] L.C. Young, "Generalized curves and the existence of an attained absolute minimum in the calculus of variations" C.R. Soc. Sci. Lettres de Varsovie, Cl. III , 30 (1937) pp. 212–234 Zbl 0019.21901 Zbl 63.1064.01
[a10] L.C. Young, "Generalized surfaces in the calculus of variations, I–II" Ann. of Math. , 43 (1942) pp. 84–103; 530–544
How to Cite This Entry:
Young measure. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Young_measure&oldid=50284
This article was adapted from an original article by Pablo Pedregal (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.