Introduction to turbulence/Stationarity and homogeneity

From CFD-Wiki

{{Introduction to turbulence menu}}
== Processes statistically stationary in time ==
An alternative way of looking at ''stationarity'' is to note that ''the statistics of the process are independent of the origin in time''. It is obvious from the above, for example, that if the statistics of a process are time independent, then <math> \left\langle  u^{n} \left( t \right) \right\rangle = \left\langle u^{n} \left( t + T \right) \right\rangle </math> , etc., where <math> T </math> is some arbitrary translation of the origin in time. Less obvious, but equally true, is that the product <math> \left\langle u \left( t \right) u \left( t' \right) \right\rangle </math> depends only on time difference <math> t'-t </math> and not on <math> t </math> (or <math> t' </math> ) directly. This consequence of stationarity can be extended to any product moment. For example <math> \left\langle u \left( t \right) v \left( t' \right) \right\rangle </math> can depend only on the time difference <math> t'-t </math>. And <math> \left\langle u \left( t \right) v \left( t' \right) w \left( t'' \right)\right\rangle </math> can depend only on the two time differences <math> t'- t </math> and <math> t'' - t </math> (or <math> t'' - t' </math> ) and not <math> t </math> , <math> t' </math> or <math> t'' </math> directly.
== Autocorrelation ==
One of the most useful statistical moments in the study of stationary random processes (and turbulence, in particular) is the '''autocorrelation''' defined as the average of the product of the random variable evaluated at two times, i.e. <math> \left\langle u \left( t \right) u \left( t' \right)\right\rangle </math>. Since the process is assumed stationary, this product can depend only on the time difference <math> \tau = t' - t </math>. Therefore the autocorrelation can be written as:
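Though not part of the original text, the following Python sketch shows how such a time-average autocorrelation estimate is computed in practice from a single long record. The AR(1) ("red noise") signal and every parameter in it are illustrative assumptions:

```python
import numpy as np

# Hypothetical illustration: estimate C(tau) = <u(t) u(t+tau)> by time
# averaging one long record of a synthetic stationary process. The AR(1)
# construction and its coefficient are assumed choices, not from the text.
rng = np.random.default_rng(0)
n = 200_000
phi = 0.95                       # lag-one correlation of the synthetic signal
eps = rng.standard_normal(n)
u = np.empty(n)
u[0] = eps[0]
for i in range(1, n):
    u[i] = phi * u[i - 1] + eps[i]
u -= u.mean()                    # work with the fluctuating part

def autocorrelation(u, max_lag):
    """Time-average estimate of C(tau) for tau = 0 .. max_lag (in samples)."""
    n = len(u)
    return np.array([np.mean(u[: n - k] * u[k:]) for k in range(max_lag + 1)])

C = autocorrelation(u, 100)
# C[0] is the variance; C decays toward zero as the process "forgets" its past.
```

Because the process is stationary, averaging over the record stands in for the ensemble average in the definition above.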
== Autocorrelation coefficient ==
It is convenient to define the ''autocorrelation coefficient'' as:
for all values of <math> \tau </math>.
== Integral scale ==
One of the most useful measures of the length of time a process is correlated with itself is the integral scale defined by
It is easy to see why this works by looking at Figure 5.2. In effect we have replaced the area under the correlation coefficient by a rectangle of height unity and width <math> T_{int} </math>.
== Temporal Taylor microscale ==
The autocorrelation can be expanded about the origin in a Maclaurin series; i.e.,
<table width="70%"><tr><td>
:<math>
\epsilon^{2} \equiv \frac{var \left[ U_{T} \right]}{U^{2}} = \frac{2T_{int}}{T} \frac{var \left[ u \right]}{U^{2}}
</math>
</td><td width="5%">(39)</td></tr></table>
Therefore the estimator does, in fact, converge (in mean square) to the correct result as the averaging time <math> T </math> increases relative to the integral scale <math> T_{int} </math>.

There is a direct relationship between equation 39 and equation 52 in the chapter The elements of statistical analysis (section Bias and convergence of estimators), which gave the mean square variability for the ensemble estimate from a finite number of statistically independent realizations, <math> X_{N} </math>. Obviously the effective number of independent realizations for the finite time estimator is:
<table width="70%"><tr><td>
:<math>
N_{eff} = \frac{T}{2T_{int}}
</math>
</td><td width="5%">(40)</td></tr></table>
so that the two expressions are equivalent. Thus, in effect, ''portions of the record separated by two integral scales behave as though they were statistically independent, at least as far as convergence of finite time estimators is concerned''.
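This equivalence can be checked numerically. The sketch below is a hypothetical illustration (the AR(1) process and all its parameters are assumptions, not from the text), chosen because its integral scale is known exactly: in units of the sampling interval, <math> 2T_{int} = (1+\phi)/(1-\phi) </math>. The variance of the finite-time mean across an ensemble of independent records should then match <math> var \left[ u \right] / N_{eff} </math>:

```python
import numpy as np

# Hypothetical check of the convergence result: for an AR(1) process with
# coefficient phi driven by unit white noise, the stationary variance is
# 1/(1 - phi^2) and, in units of the sampling interval, 2*T_int = (1+phi)/(1-phi).
rng = np.random.default_rng(1)
m, n, phi = 4000, 1000, 0.9            # ensemble size, record length, AR(1) coefficient
var_u = 1.0 / (1.0 - phi**2)           # stationary variance of u
two_t_int = (1.0 + phi) / (1.0 - phi)  # 2*T_int in samples

u = np.empty((m, n))
u[:, 0] = rng.standard_normal(m) * np.sqrt(var_u)   # start in the stationary state
eps = rng.standard_normal((m, n))
for t in range(1, n):
    u[:, t] = phi * u[:, t - 1] + eps[:, t]

# variability of the finite-time estimator across the ensemble of records
var_estimator = np.var(u.mean(axis=1))
# prediction: var[u] * (2*T_int / T), i.e. var[u] / N_eff with N_eff = T/(2*T_int)
predicted = var_u * two_t_int / n
ratio = var_estimator / predicted      # should be close to one
```

The agreement illustrates that the record behaves as though it contained one independent sample per two integral scales.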
Thus what is required for convergence is, again, many ''independent'' pieces of information. This is illustrated in Figure 5.6. That the length of the record should be measured in terms of the integral scale should really be no surprise, since the integral scale is a measure of the rate at which a process forgets its past.

'''Example'''

It is desired to measure the mean velocity in a turbulent flow to within an rms error of 1% (i.e. <math> \epsilon = 0.01 </math>). The expected fluctuation level of the signal is 25% and the integral scale is estimated to be 100 ms. What is the required averaging time?

From equation 39,

<table width="70%"><tr><td>
:<math>
\begin{matrix}
T  & = & \frac{2T_{int}}{\epsilon^{2}} \frac{var \left[ u \right]}{U^{2}} \\
& = & 2 \times 0.1 \times (0.25)^{2} / (0.01)^{2} = 125 \mbox{ sec} \\
\end{matrix}
</math>
</td><td width="5%">(41)</td></tr></table>
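The arithmetic of the example can be checked directly with a trivial script (the values are those quoted in the example above):

```python
# Worked check of the example: a 1% rms error, a 25% fluctuation level,
# and a 100 ms integral scale, all taken from the text.
epsilon = 0.01            # desired relative rms error
intensity = 0.25          # fluctuation level u'/U, so var[u]/U^2 = intensity**2
T_int = 0.1               # integral scale in seconds (100 ms)

T = 2.0 * T_int * intensity**2 / epsilon**2   # required averaging time, seconds
# T comes out to 125 seconds, as in equation 41
```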
Similar considerations apply to any other finite time estimator, and equation 55 from the chapter The elements of statistical analysis can be applied directly as long as equation 40 is used for the number of independent samples.

It is common experimental practice not to actually carry out an analog integration. Rather, the signal is sampled at fixed intervals in time by digital means and the averages are computed as for an ensemble with a finite number of realizations. Regardless of the manner in which the signal is processed, only a finite portion of a stationary time series can be analyzed, and the preceding considerations always apply.

It is important to note that data sampled more rapidly than once every two integral scales do '''not''' contribute to the convergence of the estimator, since they cannot be considered independent. If <math> N </math> is the actual number of samples acquired and <math> \Delta t </math> is the time between samples, then the effective number of independent realizations is
<table width="70%"><tr><td>
:<math>
N_{eff} = \left\{
              \begin{array}{lll}
                  N \Delta t / 2T_{int} & if & \Delta t < 2T_{int} \\
                  N & if &  \Delta t \geq  2T_{int} \\
              \end{array}
    \right.
</math>
</td><td width="5%">(42)</td></tr></table>
It should be clear that if you sample more rapidly than once every two integral scales (i.e., <math> \Delta t < 2T_{int} </math>), you are processing unnecessary data which does not help your statistics converge.
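As a sketch, the effective-sample logic for digitally sampled records can be written as a small helper (a hypothetical function for illustration, not from the text; it assumes the redundancy rule stated above, with closely spaced samples limited by the record length <math> T = N \Delta t </math>):

```python
def n_effective(n_samples, dt, t_int):
    """Effective number of independent samples for a record sampled every dt.

    Samples spaced closer than two integral scales are redundant, so the
    estimate is then limited by the total record length T = n_samples * dt.
    (Illustrative helper; names and interface are assumptions.)
    """
    if dt >= 2.0 * t_int:
        return float(n_samples)              # samples already independent
    return n_samples * dt / (2.0 * t_int)    # T / (2 * t_int)

# e.g. 10,000 samples taken every 1 ms of a process with a 100 ms integral
# scale span T = 10 s, hence only T/(2*T_int) = 10/0.2 = 50 independent pieces
```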
You may wonder why one would ever take data faster than absolutely necessary, since it simply fills up your computer memory with statistically redundant data. When we talk about measuring spectra you will learn that for spectral measurements it is necessary to sample much faster to avoid spectral aliasing. Many wrongly infer that they must sample at these higher rates even when measuring only moments. Obviously this is not the case if you are not measuring spectra.
== Random fields of space and time ==

To this point only temporally varying random fields have been discussed. For turbulence, however, random fields can be functions of both space and time. For example, the temperature <math> \theta </math> could be a random scalar function of time <math> t </math> and position <math> \stackrel{\rightarrow}{x} </math>, i.e.,

<table width="70%"><tr><td>
:<math>
\theta = \theta \left( \stackrel{\rightarrow}{x} , t  \right)
</math>
</td><td width="5%">(43)</td></tr></table>

The velocity is another example of a random vector function of position and time, i.e.,

<table width="70%"><tr><td>
:<math>
\stackrel{\rightarrow}{u} = \stackrel{\rightarrow}{u} \left( \stackrel{\rightarrow}{x},t \right)
</math>
</td><td width="5%">(44)</td></tr></table>

or in tensor notation,

<table width="70%"><tr><td>
:<math>
u_{i} = u_{i} \left( \stackrel{\rightarrow}{x},t \right)
</math>
</td><td width="5%">(45)</td></tr></table>

In the general case, the ensemble averages of these quantities are functions of both position and time; i.e.,
<table width="70%"><tr><td>
:<math>
\left\langle \theta \left( \stackrel{\rightarrow}{x},t \right) \right\rangle \equiv \Theta \left( \stackrel{\rightarrow}{x},t \right)
</math>
</td><td width="5%">(46)</td></tr></table>

<table width="70%"><tr><td>
:<math>
\left\langle u_{i} \left( \stackrel{\rightarrow}{x},t \right) \right\rangle \equiv U_{i} \left( \stackrel{\rightarrow}{x},t \right)
</math>
</td><td width="5%">(47)</td></tr></table>

If only ''stationary'' random processes are considered, then the averages do not depend on time and are functions of <math> \stackrel{\rightarrow}{x} </math> only; i.e.,

<table width="70%"><tr><td>
:<math>
\left\langle \theta \left( \stackrel{\rightarrow}{x},t \right) \right\rangle \equiv \Theta \left( \stackrel{\rightarrow}{x} \right)
</math>
</td><td width="5%">(48)</td></tr></table>

<table width="70%"><tr><td>
:<math>
\left\langle u_{i} \left( \stackrel{\rightarrow}{x},t \right) \right\rangle \equiv U_{i} \left( \stackrel{\rightarrow}{x}\right)
</math>
</td><td width="5%">(49)</td></tr></table>

The averages may not depend on position either. If the averages are ''independent of the origin in position'', then the field is said to be '''homogeneous'''. '''Homogeneity''' (the noun corresponding to the adjective homogeneous) is exactly analogous to stationarity, except that position, rather than time, is now the variable.

It is, of course, possible (at least in concept) to have homogeneous fields which are either stationary or non-stationary. Since position, unlike time, is a vector quantity, it is also possible to have only partial homogeneity. For example, a field can be homogeneous in the <math> x_{1}- </math> and <math> x_{3}- </math> directions, but not in the <math> x_{2}- </math> direction, so that <math> U_{i}=U_{i}(x_{2}) </math> only. In fact, it appears to be dynamically impossible to have flows which are homogeneous in all variables and stationary as well, but the concept is useful nonetheless.

Homogeneity will be seen to have powerful consequences for the equations governing the averaged motion, since the spatial derivative of any averaged quantity must be identically zero. Thus even homogeneity in only one direction can considerably simplify the problem. For example, in the Reynolds stress transport equation, the entire turbulence transport is exactly zero if the field is homogeneous.
== Multi-point statistics in a homogeneous field ==

The concept of homogeneity can also be extended to multi-point statistics. Consider, for example, the correlation between the velocity at one point and that at another, as illustrated in Figure 5.7. If the time dependence is suppressed and the field is assumed statistically ''homogeneous'', this correlation is a function only of the separation of the two points, i.e.,

<table width="70%"><tr><td>
:<math>
\left\langle u_{i} \left( \stackrel{\rightarrow}{x} , t \right) u_{j} \left( \stackrel{\rightarrow}{x'} , t \right) \right\rangle \equiv B_{i,j} \left( \stackrel{\rightarrow}{r} \right)
</math>
</td><td width="5%">(50)</td></tr></table>

where <math> \stackrel{\rightarrow}{r} </math> is the separation vector defined by

<table width="70%"><tr><td>
:<math>
\stackrel{\rightarrow}{r} = \stackrel{\rightarrow}{x'} - \stackrel{\rightarrow}{x}
</math>
</td><td width="5%">(51)</td></tr></table>

or

<table width="70%"><tr><td>
:<math>
r_{i} = x'_{i} - x_{i}
</math>
</td><td width="5%">(52)</td></tr></table>

Note that the convention we shall follow for vector quantities is that the first subscript on <math> B_{i,j} </math> is the component of velocity at the first position, <math> \stackrel{\rightarrow}{x} </math>, and the second subscript is the component of velocity at the second, <math> \stackrel{\rightarrow}{x'} </math>. For scalar quantities we shall simply put the symbol for the quantity in its place. For example, we would write the two-point temperature correlation in a homogeneous field as:
<table width="70%"><tr><td>
:<math>
\left\langle \theta \left( \stackrel{\rightarrow}{x},t \right) \theta \left( \stackrel{\rightarrow}{x'},t \right) \right\rangle \equiv B_{\theta , \theta} \left( \stackrel{\rightarrow}{r} \right)
</math>
</td><td width="5%">(53)</td></tr></table>

A mixed vector/scalar correlation like the two-point temperature-velocity correlation would be written as:

<table width="70%"><tr><td>
:<math>
\left\langle u_{i} \left(  \stackrel{\rightarrow}{x} , t \right) \theta \left( \stackrel{\rightarrow}{x'},t \right) \right\rangle \equiv B_{i,\theta } \left( \stackrel{\rightarrow}{r} \right)
</math>
</td><td width="5%">(54)</td></tr></table>

On the other hand, if we meant for the temperature to be evaluated at <math> \stackrel{\rightarrow}{x} </math> and the velocity at <math> \stackrel{\rightarrow}{x'} </math> we would have to write:

<table width="70%"><tr><td>
:<math>
\left\langle \theta \left( \stackrel{\rightarrow}{x},t \right) u_{i} \left( \stackrel{\rightarrow}{x'},t \right) \right\rangle \equiv B_{ \theta, i } \left( \stackrel{\rightarrow}{r} \right)
</math>
</td><td width="5%">(55)</td></tr></table>

Most books don't bother with the subscript notation, and simply give each new correlation a new symbol. At first this seems much simpler, and it is as long as you are dealing with only one or two different correlations. But introduce a few more, then read on a half-dozen pages, and you will find you have completely forgotten what they are or how they were put together. It is usually very important to know exactly what you are talking about, so we will use this comma system to help us remember.

The consideration of vector quantities raises special issues. To see why, note first that the correlation of a scalar function of position at two points is symmetrical in <math> \stackrel{\rightarrow}{r} </math>, i.e.,
<table width="70%"><tr><td>
:<math>
B_{\theta,\theta} \left( \stackrel{\rightarrow}{r} \right) = B_{\theta,\theta} \left( - \stackrel{\rightarrow}{r} \right)
</math>
</td><td width="5%">(56)</td></tr></table>

This is easy to show from the definition of <math> B_{\theta,\theta} </math> and the fact that the field is homogeneous. Simply shift each of the position vectors by the same amount <math> - \stackrel{\rightarrow}{r} </math>, as shown in Figure 5.8, to obtain:

<table width="70%"><tr><td>
:<math>
\begin{matrix}
B_{\theta,\theta}\left( \stackrel{\rightarrow}{r} \right) & \equiv & \left\langle \theta\left( \stackrel{\rightarrow}{x}, t \right) \theta\left( \stackrel{\rightarrow}{x'}, t \right) \right\rangle \\
& = & \left\langle \theta \left( \stackrel{\rightarrow}{x} - \stackrel{\rightarrow}{r} , t \right) \theta \left( \stackrel{\rightarrow}{x'} - \stackrel{\rightarrow}{r} , t \right) \right\rangle \\
& = & B_{\theta,\theta}\left( - \stackrel{\rightarrow}{r} \right) \\
\end{matrix}
</math>
</td><td width="5%">(57)</td></tr></table>

since <math> \stackrel{\rightarrow}{x'} - \stackrel{\rightarrow}{r} = \stackrel{\rightarrow}{x} </math>; i.e., the points are reversed and the separation vector points the opposite way.
Such is not the case, in general, for ''vector'' functions of position. For example, see if you can prove to yourself the following:

<table width="70%"><tr><td>
:<math>
B_{\theta,i} \left( \stackrel{\rightarrow}{r} \right) = B_{i,\theta} \left( - \stackrel{\rightarrow}{r} \right)
</math>
</td><td width="5%">(58)</td></tr></table>

and

<table width="70%"><tr><td>
:<math>
B_{i,j} \left( \stackrel{\rightarrow}{r} \right) = B_{j,i} \left( - \stackrel{\rightarrow}{r} \right)
</math>
</td><td width="5%">(59)</td></tr></table>

Clearly the latter is symmetrical in the variable <math> \stackrel{\rightarrow}{r} </math> only when <math> i = j </math>.
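These symmetry relations are easy to verify numerically. The sketch below is a hypothetical illustration under assumed conditions: a synthetic one-dimensional periodic field (periodicity plus averaging over all points makes the sample statistics homogeneous by construction), with the smoothing width and the coupling between components chosen arbitrarily. It checks equations 56 and 59:

```python
import numpy as np

# Hypothetical numerical check of the two-point symmetry relations on a
# synthetic periodic (hence homogeneous, once averaged over all points) field.
rng = np.random.default_rng(2)
n = 4096
x = np.arange(n)
d = np.minimum(x, n - x)                    # periodic (minimum-image) distance
kernel = np.exp(-0.5 * (d / 20.0) ** 2)     # gaussian smoother centered at 0
kernel /= kernel.sum()

def smooth(raw):
    # circular convolution keeps the field periodic, hence homogeneous
    return np.real(np.fft.ifft(np.fft.fft(raw) * np.fft.fft(kernel)))

u1 = smooth(rng.standard_normal(n))
u2 = smooth(rng.standard_normal(n)) + 0.5 * u1    # correlate the two components
theta = smooth(rng.standard_normal(n))

def B(a, b, r):
    """Two-point correlation <a(x) b(x+r)>, averaged over all (periodic) x."""
    return np.mean(a * np.roll(b, -r))

r = 37
scalar_asym = B(theta, theta, r) - B(theta, theta, -r)   # equation 56: zero
tensor_asym = B(u1, u2, r) - B(u2, u1, -r)               # equation 59: zero
```

Both differences vanish to rounding error, while <math> B_{1,2} \left( \stackrel{\rightarrow}{r} \right) </math> by itself is not symmetric in <math> \stackrel{\rightarrow}{r} </math> in general.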
These properties of the two-point correlation function will be seen to play an important role in determining the interrelations among the different two-point statistical quantities. They will be especially important when we talk about spectral quantities.

== Spatial integral and Taylor microscales ==

Just as for a stationary random process, correlations between spatially varying, but ''statistically homogeneous'', random quantities ultimately go to zero; i.e., they become uncorrelated as their locations become widely separated. Because position (or relative position) is a vector quantity, however, the correlation may die off at different rates in different directions. Thus direction must be an important part of the definitions of the integral scales and microscales.

Consider, for example, the one-dimensional spatial correlation obtained by measuring the correlation between the temperature at two points along a line in the <math> x_{1} </math>-direction, say,

<table width="70%"><tr><td>
:<math>
B^{(1)}_{\theta,\theta} \left( r \right) \equiv \left\langle \theta \left( x_{1} + r , x_{2} , x_{3} , t  \right) \theta \left( x_{1} , x_{2} , x_{3} , t  \right) \right\rangle
</math>
</td><td width="5%">(60)</td></tr></table>

The superscript "(1)" denotes the coordinate direction in which the separation occurs. This distinguishes it from the vector separation of <math> B_{\theta,\theta} </math> above. Also, note that the correlation at zero separation is just the variance; i.e.,

<table width="70%"><tr><td>
:<math>
B^{(1)}_{\theta,\theta} \left( 0 \right) = \left\langle \theta^{2} \right\rangle
</math>
</td><td width="5%">(61)</td></tr></table>
The integral scale in the <math> x </math>-direction can be defined as:

<table width="70%"><tr><td>
:<math>
L^{(1)}_{\theta} \equiv \frac{1}{ \left\langle \theta^{2} \right\rangle} \int^{\infty}_{0} \left\langle \theta \left( x + r, y,z,t \right) \theta \left( x,y,z,t \right) \right\rangle dr
</math>
</td><td width="5%">(62)</td></tr></table>

It is clear that there are at least two more integral scales which could be defined by considering separations in the <math> y </math> and <math> z </math> directions. Thus

<table width="70%"><tr><td>
:<math>
L^{(2)}_{\theta} \equiv \frac{1}{ \left\langle \theta^{2} \right\rangle} \int^{\infty}_{0} \left\langle \theta \left( x,y + r,z,t \right) \theta \left( x,y,z,t \right) \right\rangle dr
</math>
</td><td width="5%">(63)</td></tr></table>

and

<table width="70%"><tr><td>
:<math>
L^{(3)}_{\theta} \equiv \frac{1}{ \left\langle \theta^{2} \right\rangle} \int^{\infty}_{0} \left\langle \theta \left( x,y,z + r,t \right) \theta \left( x,y,z,t \right) \right\rangle dr
</math>
</td><td width="5%">(64)</td></tr></table>
In fact, an integral scale could be defined for ''any'' direction simply by choosing the components of the separation vector <math> \stackrel{\rightarrow}{r} </math>. This situation is even more complicated when correlations of vector quantities are considered. For example, consider the correlation of the velocity vectors at two points, <math> B_{i,j} \left( \stackrel{\rightarrow}{r} \right) </math>. Clearly <math> B_{i,j} \left( \stackrel{\rightarrow}{r} \right) </math> is not a single correlation, but rather nine separate correlations: <math> B_{1,1} \left( \stackrel{\rightarrow}{r} \right) </math>, <math> B_{1,2} \left( \stackrel{\rightarrow}{r} \right) </math>, <math> B_{1,3} \left( \stackrel{\rightarrow}{r} \right) </math>, <math> B_{2,1} \left( \stackrel{\rightarrow}{r} \right) </math>, <math> B_{2,2} \left( \stackrel{\rightarrow}{r} \right) </math>, etc. For each of these an integral scale can be defined once a direction for the separation vector is chosen. For example, the integral scales associated with <math> B_{1,1} </math> for the principal directions are

<table width="70%"><tr><td>
:<math>
L^{(1)}_{1,1} \equiv \frac{1}{\left\langle u^{2}_{1} \right\rangle} \int^{\infty}_{0} B_{1,1} \left( r,0,0 \right) dr
</math>
</td><td width="5%">(65)</td></tr></table>

<table width="70%"><tr><td>
:<math>
L^{(2)}_{1,1} \equiv \frac{1}{\left\langle u^{2}_{1} \right\rangle} \int^{\infty}_{0} B_{1,1} \left( 0,r,0 \right) dr
</math>
</td><td width="5%">(66)</td></tr></table>

<table width="70%"><tr><td>
:<math>
L^{(3)}_{1,1} \equiv \frac{1}{\left\langle u^{2}_{1} \right\rangle} \int^{\infty}_{0} B_{1,1} \left( 0,0,r \right) dr
</math>
</td><td width="5%">(67)</td></tr></table>
Similar integral scales can be defined for the other components of the correlation tensor. Two of particular importance in the development of turbulence theory are:

<table width="70%"><tr><td>
:<math>
L^{(2)}_{1,1} \equiv \frac{1}{\left\langle u^{2}_{1} \right\rangle} \int^{\infty}_{0} B_{1,1} \left( 0,r,0 \right) dr
</math>
</td><td width="5%">(68)</td></tr></table>

<table width="70%"><tr><td>
:<math>
L^{(1)}_{2,2} \equiv \frac{1}{\left\langle u^{2}_{2} \right\rangle} \int^{\infty}_{0} B_{2,2} \left( r,0,0 \right) dr
</math>
</td><td width="5%">(69)</td></tr></table>

In general, each of these integral scales will be different unless restrictions beyond simple homogeneity are placed on the process (e.g., ''isotropy'', discussed below). Thus it is important to specify precisely which integral scale is being referred to; i.e., which components of the vector quantities are being used and in which direction the integration is being performed.
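As a concrete sketch of how such a scale is measured, the following Python illustration estimates <math> L^{(1)}_{1,1} </math> from a synthetic one-dimensional record. Everything in it (the gaussian-smoothed field, the periodic estimator, the integration cutoff) is an assumed construction, chosen so the exact answer is known:

```python
import numpy as np

# Hypothetical estimate of an integral scale from a synthetic 1-D record.
# White noise smoothed with a gaussian of width sigma has correlation
# coefficient rho(r) = exp(-r^2/(4 sigma^2)), whose integral over r >= 0
# is sigma*sqrt(pi); the numerical estimate should land close to that.
rng = np.random.default_rng(3)
n, sigma = 1 << 17, 20.0
x = np.arange(n)
d = np.minimum(x, n - x)                     # periodic distance from the origin
kernel = np.exp(-0.5 * (d / sigma) ** 2)
kernel /= kernel.sum()
u1 = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(n)) * np.fft.fft(kernel)))
u1 -= u1.mean()

# periodic two-point correlation B_{1,1}(r) of the record, computed via the FFT
B11 = np.real(np.fft.ifft(np.abs(np.fft.fft(u1)) ** 2)) / n
rho = B11 / B11[0]

cutoff = int(10 * sigma)                     # integrate only until rho has died off
L11 = np.sum(rho[:cutoff]) - 0.5 * (rho[0] + rho[cutoff - 1])   # trapezoid rule

expected = sigma * np.sqrt(np.pi)            # exact result for the gaussian model
```

Note the cutoff: in practice the measured correlation never settles exactly on zero, so the upper limit of the integration must be chosen with some care.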
Similar considerations apply to the Taylor microscales, regardless of whether they are being determined from the correlations at small separations, or from the mean square fluctuating gradients. The two most commonly used Taylor microscales are often referred to as <math> \lambda_{f} </math> and <math> \lambda_{g} </math> and are defined by

<table width="70%"><tr><td>
:<math>
\lambda^{2}_{f} \equiv 2 \frac{ \left\langle u^{2}_{1} \right\rangle }{ \left\langle \left[ \partial u_{1} / \partial x_{1}  \right]^{2} \right\rangle }
</math>
</td><td width="5%">(70)</td></tr></table>

and

<table width="70%"><tr><td>
:<math>
\lambda^{2}_{g} \equiv 2 \frac{ \left\langle u^{2}_{1} \right\rangle }{ \left\langle \left[ \partial u_{1} / \partial x_{2}  \right]^{2} \right\rangle }
</math>
</td><td width="5%">(71)</td></tr></table>
The subscripts <math> f </math> and <math> g </math> refer to the autocorrelation coefficients defined by:

<table width="70%"><tr><td>
:<math>
f \left( r \right) \equiv \frac{\left\langle u_{1} \left( x_{1} + r,x_{2},x_{3} \right) u_{1} \left( x_{1},x_{2},x_{3} \right) \right\rangle}{ \left\langle u^{2}_{1} \right\rangle } = \frac{B_{1,1} \left( r,0,0 \right)}{ B_{1,1} \left( 0,0,0 \right) }
</math>
</td><td width="5%">(72)</td></tr></table>

and

<table width="70%"><tr><td>
:<math>
g \left( r \right) \equiv \frac{\left\langle u_{1} \left( x_{1},x_{2}+r,x_{3} \right) u_{1} \left( x_{1},x_{2},x_{3} \right) \right\rangle}{ \left\langle u^{2}_{1} \right\rangle } = \frac{B_{1,1} \left( 0,r,0 \right)}{ B_{1,1} \left( 0,0,0 \right) }
</math>
</td><td width="5%">(73)</td></tr></table>
It is straightforward to show from the definitions that <math> \lambda_{f} </math> and <math> \lambda_{g} </math> are related to the curvature of the <math> f </math> and <math> g </math> correlation functions at <math> r=0 </math>. Specifically,

<table width="70%"><tr><td>
:<math>
\lambda^{2}_{f}= - \frac{2}{d^{2} f / dr^{2} |_{r=0}  }
</math>
</td><td width="5%">(74)</td></tr></table>

and

<table width="70%"><tr><td>
:<math>
\lambda^{2}_{g}= - \frac{2}{d^{2} g / dr^{2} |_{r=0}  }
</math>
</td><td width="5%">(75)</td></tr></table>
Since both <math> f </math> and <math> g </math> are symmetrical functions of <math> r </math>, <math> df/dr </math> and <math> dg/dr </math> must be zero at <math> r=0 </math>. It follows immediately that the leading <math> r </math>-dependent term in the expansion about the origin of each autocorrelation is of parabolic form; i.e.,

<table width="70%"><tr><td>
:<math>
f \left( r \right) = 1 - \frac{r^{2}}{\lambda^{2}_{f}} + \cdots
</math>
</td><td width="5%">(76)</td></tr></table>

and

<table width="70%"><tr><td>
:<math>
g \left( r \right) = 1 - \frac{r^{2}}{\lambda^{2}_{g}} + \cdots
</math>
</td><td width="5%">(77)</td></tr></table>

This is illustrated in Figure 5.9, which shows that each Taylor microscale is the intersection with the <math> r </math>-axis of the parabola fitted to the appropriate correlation function at the origin. Fitting a parabola is a common way to determine the Taylor microscale, but to do so you must make sure you accurately resolve scales much smaller than the microscale itself (typically an order of magnitude smaller is required). Otherwise you are simply determining the spatial filtering of your probe or numerical algorithm.
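The resolution warning can be made concrete with a model correlation function (an assumed gaussian form, used only for illustration): fitting the osculating parabola with well-resolved small separations recovers <math> \lambda </math>, while fitting with separations comparable to <math> \lambda </math> badly overestimates it.

```python
import numpy as np

# Model correlation coefficient rho(r) = exp(-r^2/(4 sigma^2)), an assumed
# illustrative form, for which rho = 1 - r^2/(4 sigma^2) + ... so the
# Taylor microscale is exactly lambda = 2*sigma.
sigma = 20.0
lam_true = 2.0 * sigma

def microscale_from_fit(r):
    """Fit rho ~ 1 - r^2/lambda^2 over the given separations r."""
    rho = np.exp(-r**2 / (4.0 * sigma**2))
    slope = np.polyfit(r**2, rho, 1)[0]      # should be close to -1/lambda^2
    return np.sqrt(-1.0 / slope)

lam_resolved = microscale_from_fit(np.linspace(0.0, 4.0, 81))    # r << lambda
lam_coarse = microscale_from_fit(np.linspace(0.0, 60.0, 7))      # r ~ lambda

# lam_resolved sits very close to 2*sigma; lam_coarse is far too large,
# because the parabola is only the leading term of the expansion near r = 0
```

In a measurement the coarse case corresponds to a probe (or grid) that low-pass filters the field before the fit is ever made.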
{{Turbulence credit wkgeorge}}

{{Chapter navigation|Turbulence kinetic energy|Homogeneous turbulence}}

Latest revision as of 09:19, 25 February 2008


== Processes statistically stationary in time ==

Many random processes have the characteristic that their statistical properties do not appear to depend directly on time, even though the random variables themselves are time-dependent. For example, consider the signals shown in Figures 2.2 and 2.5.

When the statistical properties of a random process are independent of time, the random process is said to be '''stationary'''. For such a process all the moments are time-independent, e.g., <math> \left\langle \tilde{u} \left( t \right) \right\rangle = U </math>, etc. In fact, the probability density itself is time-independent, as should be obvious from the fact that the moments are time-independent.
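A hypothetical ensemble experiment (everything below, including the AR(1) stand-in process and its parameters, is an assumed illustration, not from the text) shows what time-independent moments mean in practice: the ensemble average of <math> u^{2} \left( t \right) </math> comes out the same no matter which time is chosen.

```python
import numpy as np

# Illustration of stationarity: for an ensemble of realizations of a process
# whose statistics are independent of the origin in time, <u^2(t)> is the
# same at every t. The AR(1) process below is an assumed stand-in,
# initialized from its stationary distribution.
rng = np.random.default_rng(4)
m, n, phi = 10_000, 400, 0.9
var_stat = 1.0 / (1.0 - phi**2)          # stationary variance for unit noise

u = np.empty((m, n))
u[:, 0] = rng.standard_normal(m) * np.sqrt(var_stat)
eps = rng.standard_normal((m, n))
for t in range(1, n):
    u[:, t] = phi * u[:, t - 1] + eps[:, t]

m2_early = np.mean(u[:, 10] ** 2)        # ensemble second moment at one time...
m2_late = np.mean(u[:, 350] ** 2)        # ...and at a much later time
# both estimates sit near var_stat, independent of the time chosen
```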


== Autocorrelation ==

One of the most useful statistical moments in the study of stationary random processes (and turbulence, in particular) is the '''autocorrelation''', defined as the average of the product of the random variable evaluated at two times, i.e. <math> \left\langle u \left( t \right) u \left( t' \right)\right\rangle </math>. Since the process is assumed stationary, this product can depend only on the time difference <math> \tau = t' - t </math>. Therefore the autocorrelation can be written as:

<table width="70%"><tr><td>
:<math>
C \left( \tau \right) \equiv \left\langle u \left( t \right) u \left( t + \tau \right)  \right\rangle
</math>
</td><td width="5%">(1)</td></tr></table>

The importance of the autocorrelation lies in the fact that it indicates the "memory" of the process; that is, the time over which the process is correlated with itself. Contrast the two cases: the autocorrelation of a deterministic sine wave is simply a cosine, as can easily be proven, and there is no time beyond which it can be guaranteed to be arbitrarily small, since the wave always "remembers" when it began and thus always remains correlated with itself. By contrast, a stationary random process like the one illustrated in the figure will eventually lose all correlation and go to zero. In other words it has a "finite memory" and "forgets" how it was. Note that one must be careful to make sure that a correlation really both goes to zero and stays down before drawing conclusions, since even the sine wave autocorrelation is zero at some points. Stationary random processes always have two-time correlation functions which eventually go to zero and stay there.

'''Example 1.'''

Consider the motion of an automobile responding to the movement of the wheels over a rough surface. In the usual case where the road roughness is randomly distributed, the motion of the car will be a weighted history of the road's roughness, with the most recent bumps having the most influence and distant bumps eventually forgotten. On the other hand, if the car is travelling down a railroad track, the periodic crossing of the railroad ties represents a deterministic input, and the motion will remain correlated with itself indefinitely. This is a very bad thing if the tie crossing rate corresponds to a natural resonance of the suspension system of the vehicle.

Since a random process can never be more than perfectly correlated, it can never achieve a correlation greater than its value at the origin. Thus

 
\left| C \left( \tau \right) \right| \leq C\left( 0 \right)
(2)

An important consequence of stationarity is that the autocorrelation is symmetric in the time difference  \tau = t' - t . To see this simply shift the origin in time backwards by an amount  \tau  and note that independence of origin implies:

 
\left\langle u \left( t \right) u \left( t + \tau \right) \right\rangle  = \left\langle u \left( t - \tau \right)  u \left( t \right) \right\rangle
(3)

Since the right hand side is simply  C \left( - \tau \right)   , it follows immediately that:

 
C \left( \tau \right) = C \left( - \tau \right)
(4)

Autocorrelation coefficient

It is convenient to define the autocorrelation coefficient as:


\rho \left( \tau \right) \equiv \frac{ C \left( \tau \right)}{ C \left( 0 \right)} = \frac{\left\langle u \left( t \right) u \left( t + \tau \right) \right\rangle}{ \left\langle  u^{2} \right\rangle }
(5)

where


\left\langle u^{2} \right\rangle = \left\langle u \left( t \right) u \left( t \right) \right\rangle = C \left( 0 \right) = var \left[ u \right]
(6)

Since the autocorrelation is symmetric, so is its coefficient, i.e.,


\rho \left( \tau \right) = \rho  \left( - \tau \right)
(7)

It is also obvious from the fact that the autocorrelation is maximal at the origin that the autocorrelation coefficient must also be maximal there. In fact from the definition it follows that


\rho \left( 0 \right) = 1
(8)

and


\left| \rho \left( \tau \right) \right| \leq 1
(9)

for all values of  \tau .

Integral scale

One of the most useful measures of the length of time a process is correlated with itself is the integral scale defined by


T_{int} \equiv \int^{\infty}_{0} \rho \left( \tau \right) d \tau
(10)

It is easy to see why this works by looking at Figure 5.2. In effect we have replaced the area under the correlation coefficient by a rectangle of height unity and width  T_{int} .
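As a concrete illustration, the autocorrelation coefficient of equation 5 and the integral scale of equation 10 can be estimated from a sampled record. The sketch below (a hypothetical synthetic first-order autoregressive signal standing in for a measured turbulence signal, using NumPy; all parameter choices are arbitrary) recovers an integral scale close to the value built into the process:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stationary signal: a first-order autoregressive (AR(1)) process,
# whose true autocorrelation coefficient is rho(tau) = exp(-tau / T_true).
dt = 0.01           # sample spacing (arbitrary time units)
T_true = 0.5        # integral scale built into the process
a = np.exp(-dt / T_true)
n = 200_000
u = np.empty(n)
u[0] = rng.standard_normal()
noise = rng.standard_normal(n) * np.sqrt(1.0 - a * a)
for i in range(1, n):
    u[i] = a * u[i - 1] + noise[i]

# Sample autocorrelation coefficient rho(tau) for lags up to several T_true
max_lag = int(5 * T_true / dt)
var = np.mean(u * u)
rho = np.array([np.mean(u[:n - k] * u[k:]) / var for k in range(max_lag)])

# Integral scale: the area under rho(tau), equation (10)
T_int = dt * np.sum(rho)
print(T_int)   # should come out close to T_true
```

Note that the estimate is itself a finite-time statistic, so it scatters about the true value; a much shorter record would give a much poorer estimate.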

Temporal Taylor microscale

The autocorrelation can be expanded about the origin in a Maclaurin series; i.e.,


C \left( \tau \right) = C \left( 0 \right) + \tau \frac{ d C }{ d \tau }|_{\tau = 0} + \frac{1}{2} \tau^{2} \frac{d^{2} C}{d \tau^{2} }|_{\tau = 0} + \frac{1}{3!} \tau^{3} \frac{d^{3} C}{d \tau^{3} }|_{\tau = 0} + \cdots
(11)

But we know the autocorrelation is symmetric in  \tau , hence the odd terms in  \tau  must be identically zero (i.e.,  dC / d\tau |_{\tau = 0} = 0 ,  d^{3}C / d\tau^{3} |_{\tau = 0} = 0 , etc.). Therefore the expansion of the autocorrelation near the origin reduces to:


C \left( \tau \right) = C \left( 0 \right) + \frac{1}{2} \tau^{2} \frac{d^{2} C}{d \tau^{2} }|_{\tau = 0} + \cdots
(12)

Similarly, the autocorrelation coefficient near the origin can be expanded as:


\rho \left( \tau \right) = 1 + \frac{1}{2}\frac{d^{2}\rho}{d \tau^{2}}|_{\tau = 0} \tau^{2}+ \cdots
(13)

where we have used the fact that  \rho \left( 0 \right) = 1 . If we define  ' = d / d\tau  we can write this compactly as:


\rho \left( \tau \right) = 1 + \frac{1}{2} \rho '' \left( 0 \right) \tau^{2} + \cdots
(14)

Since  \rho \left( \tau \right) has its maximum at the origin, obviously  \rho'' \left( 0 \right) must be negative.

We can use the correlation and its second derivative at the origin to define a special time scale,  \lambda_{\tau} (called the Taylor microscale) by:


\lambda^{2}_{\tau} \equiv - \frac{2}{\rho'' \left( 0 \right)}
(15)

Using this in equation 14 yields the expansion for the correlation coefficient near the origin as:


\rho \left( \tau \right) = 1 - \frac{\tau^{2}}{\lambda^{2}_{\tau}} + \cdots
(16)

Thus very near the origin the correlation coefficient (and the autocorrelation as well) simply rolls off parabolically; i.e.,


\rho \left( \tau \right) \approx 1 - \frac{\tau^{2}}{\lambda^{2}_{\tau}}
(17)

This parabolic curve is shown in Figure 5.3 as the osculating (or 'kissing') parabola which approaches zero exactly as the autocorrelation coefficient does. The intercept of this osculating parabola with the  \tau -axis is the Taylor microscale,  \lambda_{\tau} .

The Taylor microscale is significant for a number of reasons. First, for many random processes (e.g., Gaussian), the Taylor microscale can be proven to be proportional to the average time between zero-crossings of the random variable. This is approximately true for turbulence as well. Thus one can quickly estimate the Taylor microscale by simply observing the zero-crossings on an oscilloscope trace.

The Taylor microscale also has a special relationship to the mean square time derivative of the signal,  \left\langle  \left[ d u / d t \right]^{2} \right\rangle . This is easiest to derive if we consider two stationary random signals at two different times, say  u = u \left( t \right)  and  u' = u' \left( t' \right) . The derivative of the first signal is  d u / d t  and of the second,  d u' / d t' . Now let's multiply these together and rewrite them as:


\frac{du'}{dt'} \frac{du}{dt} = \frac{d^{2}}{dtdt'} u \left( t \right) u' \left( t' \right)
(18)

where the right-hand side follows from our assumption that  u is not a function of  t' nor  u' a function of  t .

Now if we average and interchange the operations of differentiation and averaging, we obtain:


\left\langle \frac{du'}{dt'} \frac{du}{dt} \right\rangle = \frac{d^{2}}{dtdt'} \left\langle u \left( t \right) u' \left( t' \right) \right\rangle
(19)

Here comes the first trick: we simply take  u' to be exactly  u but evaluated at time  t' . So  u \left( t \right) u' \left( t' \right) simply becomes  u \left( t \right) u  \left( t' \right) and its average is just the autocorrelation,  C \left( \tau \right) . Thus we are left with:


\left\langle \frac{du'}{dt'} \frac{du}{dt} \right\rangle =  \frac{d^{2}}{dtdt'} C \left( t' - t \right)
(20)

Now we simply need to use the chain-rule. We have already defined  \tau = t' - t . Let's also define  \xi = t' + t and transform the derivatives involving  t and  t' to derivatives involving  \tau and  \xi . The result is:


\frac{d^{2}}{dtdt'} = \frac{d^{2}}{d \xi^{2}} - \frac{d^{2}}{d \tau^{2}}
(21)

So equation 20 becomes


\left\langle \frac{du'}{dt'} \frac{du}{dt} \right\rangle = \frac{d^{2}}{d \xi^{2}}C \left( \tau \right) - \frac{d^{2}}{d \tau^{2}} C \left( \tau \right)
(22)

But since  C is a function only of  \tau , the derivative of it with respect to  \xi is identically zero. Thus we are left with:


\left\langle \frac{du'}{dt'} \frac{du}{dt} \right\rangle = - \frac{d^{2}}{d \tau^{2}} C \left( \tau \right)
(23)

And finally we need the second trick. Let's evaluate both sides at  t = t' (or   \tau = 0 ) to obtain the mean square derivative as:


\left\langle \left( \frac{du}{dt} \right)^{2} \right\rangle = - \frac{d^{2}}{d \tau^{2}} C \left( \tau \right)|_{ \tau = 0}
(24)

But from our definition of the Taylor microscale and the facts that  C \left( 0 \right) = \left\langle u^{2} \right\rangle and  C \left( \tau \right) = \left\langle u^{2} \right\rangle \rho \left( \tau \right) , this is exactly the same as:


\left\langle \left( \frac{du}{dt} \right)^{2} \right\rangle = 2 \frac{ \left\langle u^{2} \right\rangle}{\lambda^{2}_{\tau}}
(25)

This amazingly simple result is very important in the study of turbulence, especially after we extend it to spatial derivatives.
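Equation 25 is easy to check numerically. The sketch below uses a hypothetical ensemble of random-phase cosines,  u(t) = \sqrt{2}\sigma\cos(\omega t + \phi) , for which  \rho(\tau) = \cos(\omega\tau)  and hence  \lambda_{\tau} = \sqrt{2}/\omega  are known exactly (the values of  \sigma  and  \omega  are arbitrary), and compares the mean square derivative with  2\langle u^{2}\rangle / \lambda^{2}_{\tau} :

```python
import numpy as np

rng = np.random.default_rng(1)

# Ensemble of random-phase cosines: u(t) = sqrt(2)*sigma*cos(omega*t + phi).
# Their autocorrelation coefficient is rho(tau) = cos(omega*tau), so
# rho''(0) = -omega**2 and the Taylor microscale is lambda_tau = sqrt(2)/omega.
sigma, omega = 1.5, 3.0
lam_tau = np.sqrt(2.0) / omega

# Mean square derivative over the ensemble, evaluated at t = 0
phi = rng.uniform(0.0, 2.0 * np.pi, 1_000_000)
dudt = -np.sqrt(2.0) * sigma * omega * np.sin(phi)
msd = np.mean(dudt**2)

# Equation (25): <(du/dt)^2> = 2 <u^2> / lambda_tau^2, with <u^2> = sigma**2
predicted = 2.0 * sigma**2 / lam_tau**2
print(msd, predicted)   # the two should agree closely
```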

Time averages of stationary processes

It is common practice in many scientific disciplines to define a time average by integrating the random variable over a fixed time interval, i.e. ,


U_{T} \equiv \frac{1}{T} \int^{T_{2}}_{T_{1}} u \left( t \right) dt
(26)

For the stationary random processes we are considering here, we can define  T_{1} to be the origin in time and simply write:


U_{T} \equiv \frac{1}{T} \int^{T}_{0} u \left( t \right) dt
(27)

where  T = T_{2} - T_{1} is the integration time.

Figure 5.4 shows a portion of a stationary random signal over which such an integration might be performed. The time integral of  u \left( t \right)  over the interval  \left( 0, T \right)  corresponds to the shaded area under the curve. Now since  u \left( t \right)  is random and since it forms the upper boundary of the shaded area, it is clear that the time average,  U_{T} , is a lot like the estimator for the mean based on a finite number of independent realizations,  X_{N} , we encountered earlier in section Estimation from a finite number of realizations (see Elements of statistical analysis).

It will be shown in the analysis presented below that if the signal is stationary, the time average defined by equation 27 is an unbiased estimator of the true average  U . Moreover, the estimator converges to  U as the time becomes infinite; i.e., for stationary random processes


U = \lim_{T \rightarrow \infty} \frac{1}{T} \int^{T}_{0} u \left( t \right) dt
(28)

Thus the time and ensemble averages are equivalent in the limit as  T \rightarrow \infty , but only for a stationary random process.

Bias and variability of time estimators

It is easy to show that the estimator,  U_{T} , is unbiased by taking its ensemble average; i.e.,


\left\langle U_{T} \right\rangle = \left\langle \frac{1}{T}  \int^{T}_{0} u \left( t \right) dt \right\rangle = \frac{1}{T} \int^{T}_{0} \left\langle u \left( t \right) \right\rangle dt
(29)

Since the process has been assumed stationary,   \left\langle u \left( t \right) \right\rangle is independent of time. It follows that:


\left\langle U_{T} \right\rangle = \frac{1}{T} \left\langle u \left( t \right) \right\rangle T = U
(30)

To see whether the estimate improves as  T  increases, the variability of  U_{T}  must be examined, exactly as we did for  X_{N}  earlier in section Bias and convergence of estimators (see chapter The elements of statistical analysis). To do this we need the variance of  U_{T}  given by:

 
\begin{matrix}
var \left[ U_{T} \right] & = &  \left\langle \left[ U_{T} - \left\langle U_{T}  \right\rangle  \right]^{2} \right\rangle = \left\langle \left[ U_{T} - U \right]^{2} \right\rangle \\
& = &  \frac{1}{T^{2}} \left\langle \left\{ \int^{T}_{0} \left[ u \left( t \right) - U \right] dt \right\}^{2} \right\rangle \\
& = & \frac{1}{T^{2}} \left\langle \int^{T}_{0} \int^{T}_{0} \left[ u \left( t \right) - U \right] \left[ u \left( t' \right) - U \right] dtdt' \right\rangle \\
& = & \frac{1}{T^{2}} \int^{T}_{0} \int^{T}_{0} \left\langle u'\left( t \right) u'\left( t' \right)    \right\rangle dtdt' \\
\end{matrix}
(31)

But since the process is assumed stationary,  \left\langle u' \left( t \right) u' \left( t' \right)  \right\rangle = C \left( t' - t \right) , where  C \left( t' - t \right) = \left\langle u^{2} \right\rangle \rho \left( t'-t \right)  and  \rho  is the autocorrelation coefficient. Therefore the integral can be rewritten as:

 
\begin{matrix}
var \left[ U_{T} \right] & = & \frac{1}{T^{2}} \int^{T}_{0} \int^{T}_{0} C \left( t' - t \right) dtdt' \\
& = & \frac{ \left\langle u^{2} \right\rangle }{ T^{2} } \int^{T}_{0} \int^{T}_{0} \rho \left( t' - t \right) dtdt' \\
\end{matrix}
(33)

Now we need to apply some fancy calculus. If new variables  \tau= t'-t  and  \xi= t'+t are defined, the double integral can be transformed to (see Figure 5.5):

 
var \left[ U_{T} \right] = \frac{var \left[ u \right]}{2 T^{2}} \left[ \int^{T}_{0} d \tau \int^{2T-\tau}_{\tau} d \xi \rho \left( \tau \right) + \int^{0}_{-T} d \tau \int^{2T+\tau}_{-\tau} d \xi \rho \left( \tau \right) \right]
(35)

where the factor of  1/2 arises from the Jacobian of the transformation. The integrals over  d \xi can be evaluated directly to yield:

 
var \left[ U_{T} \right] = \frac{var \left[ u \right]}{T^{2}} \left\{ \int^{T}_{0} \rho \left( \tau \right) \left[ T - \tau \right] d \tau  + \int^{0}_{-T} \rho \left( \tau \right) \left[ T + \tau \right] d \tau \right\}
(36)

By noting that the autocorrelation is symmetric, the second integral can be transformed and added to the first to yield at last the result we seek as:



var \left[ U_{T} \right] = \frac{var \left[ u \right]}{T} \int^{T}_{-T} \rho \left( \tau \right) \left[ 1 - \frac{ \left| \tau \right| }{T} \right] d \tau
(37)

Now if our averaging time,  T , is chosen so large that  \left| \tau \right| / T << 1  over the range for which  \rho \left( \tau \right)  is non-zero, the integral reduces to:

 
\begin{matrix}
var \left[ U_{T} \right] & \approx & \frac{2 var \left[ u \right]}{T} \int^{T}_{0} \rho \left( \tau \right) d \tau \\
& = & \frac{2 T_{int}}{T} var \left[ u \right] \\
\end{matrix}
(38)

where  T_{int} is the integral scale defined by equation 10. Thus the variability of our estimator is given by:

 
\epsilon^{2}_{U_{T}} = \frac{var \left[ U_{T} \right]}{U^{2}} = \frac{2T_{int}}{T} \frac{var \left[ u \right]}{U^{2}}
(39)

Therefore the estimator does, in fact, converge (in mean square) to the correct result as the averaging time,  T increases relative to the integral scale,  T_{int} .
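The convergence rate predicted by equation 38 can be checked by brute force. In the sketch below, an ensemble of independent records of a hypothetical synthetic first-order autoregressive process with a known integral scale (all parameter choices arbitrary) is time-averaged, and the variance of the finite-time averages is compared with  2 T_{int} var[u] / T :

```python
import numpy as np

rng = np.random.default_rng(2)

# AR(1) process with unit variance and true rho(tau) = exp(-tau / T_int)
dt, T_int = 0.01, 0.5
a = np.exp(-dt / T_int)
n_per, n_real = 20_000, 400      # each realization spans T = 200 time units

noise = rng.standard_normal((n_real, n_per)) * np.sqrt(1.0 - a * a)
u = np.empty((n_real, n_per))
u[:, 0] = rng.standard_normal(n_real)    # start in the stationary state
for i in range(1, n_per):
    u[:, i] = a * u[:, i - 1] + noise[:, i]

# Finite-time averages U_T, one per realization; the true mean is zero
U_T = u.mean(axis=1)
T = n_per * dt

# Equation (38): var[U_T] ~ (2 T_int / T) var[u], with var[u] = 1 here
measured = U_T.var()
predicted = 2.0 * T_int / T
print(measured, predicted)   # the two should be of comparable size
```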

There is a direct relationship between equation 39 and equation 52 in chapter The elements of statistical analysis ( section Bias and convergence of estimators) which gave the mean square variability for the ensemble estimate from a finite number of statistically independent realizations,  X_{N} . Obviously the effective number of independent realizations for the finite time estimator is:

 
N_{eff} = \frac{T}{2T_{int}}
(40)

so that the two expressions are equivalent. Thus, in effect, portions of the record separated by two integral scales behave as though they were statistically independent, at least as far as convergence of finite time estimators is concerned.

Thus what is required for convergence is, again, many independent pieces of information. This is illustrated in Figure 5.6. That the length of the record should be measured in terms of the integral scale should really be no surprise, since the integral scale is a measure of the rate at which a process forgets its past.

Example

It is desired to measure the mean velocity in a turbulent flow to within an rms error of 1% (i.e.,  \epsilon = 0.01 ). The expected fluctuation level of the signal is 25% and the integral scale is estimated as 100 ms. What is the required averaging time?

From equation 39

 
\begin{matrix}
T  & = & \frac{2T_{int}}{\epsilon^{2}} \frac{var \left[ u \right]}{U^{2}} \\
& = & 2 \times 0.1 \times (0.25)^{2} / (0.01)^{2} = 125 sec \\
\end{matrix}
(41)
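The arithmetic of the example above is easily verified:

```python
# Required averaging time from equation (39): eps^2 = (2 T_int / T) var[u]/U^2,
# solved for T. The numbers are those of the example above.
eps = 0.01               # desired rms error (1%)
turb_intensity = 0.25    # sqrt(var[u]) / U, the 25% fluctuation level
T_int = 0.1              # integral scale: 100 ms expressed in seconds

T = 2.0 * T_int * turb_intensity**2 / eps**2
print(T)   # -> 125.0 seconds
```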

Similar considerations apply to any other finite time estimator and equation 55 from chapter Statistical analysis can be applied directly as long as equation 40 is used for the number of independent samples.

It is common experimental practice not to actually carry out an analog integration. Rather, the signal is sampled at fixed intervals in time by digital means, and the averages are computed as for an ensemble with a finite number of realizations. Regardless of the manner in which the signal is processed, only a finite portion of a stationary time series can be analyzed and the preceding considerations always apply.

It is important to note that data sampled more rapidly than once every two integral scales do not contribute to the convergence of the estimator since they can not be considered independent. If  N is the actual number of samples acquired and  \Delta t is the time between samples, then the effective number of independent realizations is

 
 N_{eff} = \left\{
               \begin{array}{lll}
                   N \Delta t / 2T_{int} & if & \Delta t < 2T_{int} \\
                    N & if &  \Delta t \geq  2T_{int} \\
                \end{array}
      \right.
(42)

It should be clear that if you sample faster than once every two integral scales (i.e.,  \Delta t < 2T_{int} ), you are processing unnecessary data which does not help your statistics converge.
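A minimal sketch of this effective-sample count follows (the function name is hypothetical; note that the factor of two in the denominator is what makes the two branches agree at  \Delta t = 2T_{int} ):

```python
def n_effective(n_samples: int, dt: float, t_int: float) -> float:
    """Effective number of independent samples in a record sampled
    every dt seconds from a process with integral scale t_int.

    Samples closer together than two integral scales are correlated,
    so they count only fractionally; the factor of 2 in the denominator
    makes the two branches agree at dt = 2 * t_int.
    """
    if dt < 2.0 * t_int:
        return n_samples * dt / (2.0 * t_int)
    return float(n_samples)

# 10,000 samples taken every 10 ms from a process with a 100 ms integral
# scale are worth only the equivalent of 500 independent samples.
print(n_effective(10_000, 0.010, 0.100))   # -> 500.0
```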

You may wonder why one would ever take data faster than absolutely necessary, since it simply fills up your computer memory with lots of statistically redundant data. When we talk about measuring spectra you will learn that for spectral measurements it is necessary to sample much faster to avoid spectral aliasing. Many wrongly infer that they must sample at these higher rates even when measuring just moments. Obviously this is not the case if you are not measuring spectra.

Random fields of space and time

To this point only temporally varying random fields have been discussed. For turbulence however, random fields can be functions of both space and time. For example, the temperature  \theta could be a random scalar function of time  t and position  \stackrel{\rightarrow}{x} , i.e.,

 
\theta = \theta \left( \stackrel{\rightarrow}{x} , t  \right)
(43)

The velocity is another example of a random vector function of position and time, i.e.,


\stackrel{\rightarrow}{u} = \stackrel{\rightarrow}{u} \left( \stackrel{\rightarrow}{x},t \right)
(44)

or in tensor notation,


u_{i} = u_{i} \left( \stackrel{\rightarrow}{x},t \right)
(45)

In the general case, the ensemble averages of these quantities are functions of both position and time; i.e.,


\left\langle \theta \left( \stackrel{\rightarrow}{x},t \right) \right\rangle \equiv \Theta \left( \stackrel{\rightarrow}{x},t \right)
(46)

\left\langle u_{i} \left( \stackrel{\rightarrow}{x},t \right) \right\rangle \equiv U_{i} \left( \stackrel{\rightarrow}{x},t \right)
(47)

If only stationary random processes are considered, then the averages do not depend on time and are functions of  \stackrel{\rightarrow}{x} only; i.e.,


\left\langle \theta \left( \stackrel{\rightarrow}{x},t \right) \right\rangle \equiv \Theta \left( \stackrel{\rightarrow}{x} \right)
(48)

\left\langle u_{i} \left( \stackrel{\rightarrow}{x},t \right) \right\rangle \equiv U_{i} \left( \stackrel{\rightarrow}{x}\right)
(49)

Now the averages may not be position dependent either. For example, if the averages are independent of the origin in position, then the field is said to be homogeneous. Homogeneity (the noun corresponding to the adjective homogeneous) is exactly analogous to stationarity, except that position is now the variable, and not time.

It is, of course, possible (at least in concept) to have homogeneous fields which are either stationary or non-stationary. Since position, unlike time, is a vector quantity, it is also possible to have only partial homogeneity. For example, a field can be homogeneous in the  x_{1}- and  x_{3}- directions, but not in the  x_{2}- direction, so that  U_{i}=U_{i}(x_{2})  only. In fact, it appears to be dynamically impossible to have flows which are homogeneous in all variables and stationary as well, but the concept is useful nonetheless.

Homogeneity will be seen to have powerful consequences for the equations governing the averaged motion, since the spatial derivative of any averaged quantity must be identically zero. Thus even homogeneity in only one direction can considerably simplify the problem. For example, in the Reynolds stress transport equation, the entire turbulence transport is exactly zero if the field is homogeneous.

Multi-point statistics in homogeneous fields

The concept of homogeneity can also be extended to multi-point statistics. Consider for example, the correlation between the velocity at one point and that at another as illustrated in Figure 5.7. If the time dependence is suppressed and the field is assumed statistically homogeneous, this correlation is a function only of the separation of the two points, i.e.,

 
\left\langle u_{i} \left( \stackrel{\rightarrow}{x} , t \right) u_{j} \left( \stackrel{\rightarrow}{x'} , t \right) \right\rangle \equiv B_{i,j} \left( \stackrel{\rightarrow}{r} \right)
(50)

where  \stackrel{\rightarrow}{r} is the separation vector defined by

 
\stackrel{\rightarrow}{r} = \stackrel{\rightarrow}{x'} - \stackrel{\rightarrow}{x}
(51)

or

 
r_{i} = x'_{i} - x_{i}
(52)

Note that the convention we shall follow for vector quantities is that the first subscript on  B_{i,j}  is the component of velocity at the first position,  \stackrel{\rightarrow}{x} , and the second subscript is the component of velocity at the second,  \stackrel{\rightarrow}{x'} . For scalar quantities we shall simply put a symbol for the quantity to hold the place. For example, we would write the two-point temperature correlation in a homogeneous field by:

 
\left\langle \theta \left( \stackrel{\rightarrow}{x},t \right) \theta \left( \stackrel{\rightarrow}{x'},t \right) \right\rangle \equiv B_{\theta , \theta} \left( \stackrel{\rightarrow}{r} \right)
(53)

A mixed vector/scalar correlation like the two-point temperature velocity correlation would be written as:

 
\left\langle u_{i} \left(  \stackrel{\rightarrow}{x} , t \right) \theta \left( \stackrel{\rightarrow}{x'},t \right) \right\rangle \equiv B_{i,\theta } \left( \stackrel{\rightarrow}{r} \right)
(54)

On the other hand, if we meant for the temperature to be evaluated at  \stackrel{\rightarrow}{x} and the velocity at  \stackrel{\rightarrow}{x'} we would have to write:

 
\left\langle \theta \left( \stackrel{\rightarrow}{x},t \right) u_{i} \left( \stackrel{\rightarrow}{x'},t \right) \right\rangle \equiv B_{ \theta, i } \left( \stackrel{\rightarrow}{r} \right)
(55)

Now most books don't bother with the subscript notation, and simply give each new correlation a new symbol. At first this seems much simpler; and it is as long as you are only dealing with one or two different correlations. But introduce a few more, then read about a half-dozen pages, and you will find you completely forget what they are or how they were put together. It is usually very important to know exactly what you are talking about, so we will use this comma system to help us remember.

It is easy to see that vector quantities raise special considerations. For example, the correlation between a scalar function of position at two points is symmetrical in  \stackrel{\rightarrow}{r} , i.e.,


B_{\theta,\theta} \left( \stackrel{\rightarrow}{r} \right) = B_{\theta,\theta} \left( - \stackrel{\rightarrow}{r} \right)
(56)

This is easy to show from the definition of  B_{\theta,\theta} and the fact that the field is homogeneous. Simply shift each of the position vectors by the same amount  - \stackrel{\rightarrow}{r} as shown in Figure 5.8 to obtain:

 
\begin{matrix}
B_{\theta,\theta}\left( \stackrel{\rightarrow}{r},t \right) & \equiv & \left\langle \theta\left( \stackrel{\rightarrow}{x}, t \right) \theta\left( \stackrel{\rightarrow}{x'}, t \right) \right\rangle \\
& = & \left\langle \theta \left( \stackrel{\rightarrow}{x} - \stackrel{\rightarrow}{r} , t \right) \theta \left( \stackrel{\rightarrow}{x'} - \stackrel{\rightarrow}{r} , t \right) \right\rangle \\
& = & B_{\theta,\theta}\left( - \stackrel{\rightarrow}{r},t \right) \\
\end{matrix}
(57)

since  \stackrel{\rightarrow}{x'} - \stackrel{\rightarrow}{r} = \stackrel{\rightarrow}{x}  ; i.e., the points are reversed and the separation vector is pointing the opposite way.

Such is not the case, in general, for vector functions of position. For example, see if you can prove to yourself the following:


B_{\theta,i} \left( \stackrel{\rightarrow}{r} \right) = B_{i,\theta} \left( - \stackrel{\rightarrow}{r} \right)
(58)

and


B_{i,j} \left( \stackrel{\rightarrow}{r} \right) = B_{j,i} \left( - \stackrel{\rightarrow}{r} \right)
(59)

Clearly the latter is symmetrical in the variable  \stackrel{\rightarrow}{r} only when  i = j .

These properties of the two-point correlation function will be seen to play an important role in determining the interrelations among the different two-point statistical quantities. They will be especially important when we talk about spectral quantities.

Spatial integral and Taylor microscales

Just as for a stationary random process, correlations between spatially varying, but statistically homogeneous, random quantities ultimately go to zero; i.e., they become uncorrelated as their locations become widely separated. Because position (or relative position) is a vector quantity, however, the correlation may die off at different rates in different directions. Thus direction must be an important part of the definitions of the integral scales and microscales.

Consider for example the one-dimensional spatial correlation which is obtained by measuring the correlation between the temperature at two points along a line in the x-direction, say,

 
B^{(1)}_{\theta,\theta} \left( r \right) \equiv \left\langle \theta \left( x_{1} + r , x_{2} , x_{3} , t  \right) \theta \left( x_{1} , x_{2} , x_{3} , t  \right) \right\rangle
(60)

The superscript "(1)" denotes the coordinate direction in which the separation occurs. This distinguishes it from the vector separation of  B_{\theta,\theta}  above. Also, note that the correlation at zero separation is just the variance; i.e.,

 
B^{(1)}_{\theta,\theta} \left( 0 \right) = \left\langle \theta^{2} \right\rangle
(61)

The integral scale in the  x -direction can be defined as:

 
L^{(1)}_{\theta} \equiv \frac{1}{ \left\langle \theta^{2} \right\rangle} \int^{\infty}_{0} \left\langle \theta \left( x + r, y,z,t \right) \theta \left( x,y,z,t \right) \right\rangle dr
(62)

It is clear that there are at least two more integral scales which could be defined by considering separations in the y and z directions. Thus

 
L^{(2)}_{\theta} \equiv \frac{1}{ \left\langle \theta^{2} \right\rangle} \int^{\infty}_{0} \left\langle \theta \left( x,y + r,z,t \right) \theta \left( x,y,z,t \right) \right\rangle dr
(63)

and

 
L^{(3)}_{\theta} \equiv \frac{1}{ \left\langle \theta^{2} \right\rangle} \int^{\infty}_{0} \left\langle \theta \left( x,y,z + r,t \right) \theta \left( x,y,z,t \right) \right\rangle dr
(64)

In fact, an integral scale could be defined for any direction simply by choosing the components of the separation vector  \stackrel{\rightarrow}{r} . This situation is even more complicated when correlations of vector quantities are considered. For example, consider the correlation of the velocity vectors at two points,  B_{i,j} \left( \stackrel{\rightarrow}{r} \right) . Clearly  B_{i,j} \left( \stackrel{\rightarrow}{r} \right) is not a single correlation, but rather nine separate correlations:  B_{1,1} \left( \stackrel{\rightarrow}{r} \right) ,  B_{1,2} \left( \stackrel{\rightarrow}{r} \right) ,  B_{1,3} \left( \stackrel{\rightarrow}{r} \right) ,  B_{2,1} \left( \stackrel{\rightarrow}{r} \right) ,  B_{2,2} \left( \stackrel{\rightarrow}{r} \right) , etc. For each of these an integral scale can be defined once a direction for the separation vector is chosen. For example, the integral scales associated with  B_{1,1} for the principal directions are

 
 L^{(1)}_{1,1} \equiv \frac{1}{\left\langle u^{2}_{1} \right\rangle} \int^{\infty}_{0} B_{1,1} \left( r,0,0 \right) dr
(65)
 
 L^{(2)}_{1,1} \equiv \frac{1}{\left\langle u^{2}_{1} \right\rangle} \int^{\infty}_{0} B_{1,1} \left( 0,r,0 \right) dr
(66)
 
 L^{(3)}_{1,1} \equiv \frac{1}{\left\langle u^{2}_{1} \right\rangle} \int^{\infty}_{0} B_{1,1} \left( 0,0,r \right) dr
(67)

Similar integral scales can be defined for the other components of the correlation tensor. Two of particular importance in the development of turbulence theory are:

 
 L^{(2)}_{1,1} \equiv \frac{1}{\left\langle u^{2}_{1} \right\rangle} \int^{\infty}_{0} B_{1,1} \left( 0,r,0 \right) dr
(68)
 
 L^{(1)}_{2,2} \equiv \frac{1}{\left\langle u^{2}_{2} \right\rangle} \int^{\infty}_{0} B_{2,2} \left( r,0,0 \right) dr
(69)

In general, each of these integral scales will be different, unless restrictions beyond simple homogeneity are placed on the process (e.g., like isotropy discussed below). Thus, it is important to specify precisely which integral scale is being referred to; i.e., which components of the vector quantities are being used and in which direction the integration is being performed.

Similar considerations apply to the Taylor microscales, regardless of whether they are being determined from the correlations at small separations, or from the mean square fluctuating gradients. The two most commonly used Taylor microscales are often referred to as  \lambda_{f} and  \lambda_{g} and are defined by

 
\lambda^{2}_{f} \equiv 2 \frac{ \left\langle u^{2}_{1} \right\rangle }{ \left\langle \left[ \partial u_{1} / \partial x_{1}  \right]^{2} \right\rangle }
(70)

and

 
\lambda^{2}_{g} \equiv 2 \frac{ \left\langle u^{2}_{1} \right\rangle }{ \left\langle \left[ \partial u_{1} / \partial x_{2}  \right]^{2} \right\rangle }
(71)

The subscripts f and g refer to the autocorrelation coefficients defined by:

 
f \left( r \right) \equiv \frac{\left\langle u_{1} \left( x_{1} + r,x_{2},x_{3} \right) u_{1} \left( x_{1},x_{2},x_{3} \right) \right\rangle}{ \left\langle u^{2}_{1} \right\rangle } = \frac{B_{1,1} \left( r,0,0 \right)}{ B_{1,1} \left( 0,0,0 \right) }
(72)

and

 
g \left( r \right) \equiv \frac{\left\langle u_{1} \left( x_{1},x_{2}+r,x_{3} \right) u_{1} \left( x_{1},x_{2},x_{3} \right) \right\rangle}{ \left\langle u^{2}_{1} \right\rangle } = \frac{B_{1,1} \left( 0,r,0 \right)}{ B_{1,1} \left( 0,0,0 \right) }
(73)

It is straightforward to show from the definitions that  \lambda_{f} and  \lambda_{g} are related to the curvature of the  f and  g correlation functions at  r=0 . Specifically,

 
\lambda^{2}_{f}= - \frac{2}{d^{2} f / dr^{2} |_{r=0}  }
(74)

and

 
\lambda^{2}_{g}= - \frac{2}{d^{2} g / dr^{2} |_{r=0}  }
(75)

Since both  f  and  g  are symmetrical functions of  r ,  df/dr  and  dg/dr  must be zero at  r=0 . It follows immediately that the leading  r -dependent term in the expansion about the origin of each autocorrelation is of parabolic form; i.e.,

 
f \left( r \right) = 1 - \frac{r^{2}}{\lambda^{2}_{f}} + \cdots
(76)

and

 
g \left( r \right) = 1 - \frac{r^{2}}{\lambda^{2}_{g}} + \cdots
(77)

This is illustrated in Figure 5.9 which shows that the Taylor microscales are the intersection with the  r -axis of a parabola fitted to the appropriate correlation function at the origin. Fitting a parabola is a common way to determine the Taylor microscale, but to do so you must make sure you resolve accurately to scales much smaller than it (typically an order of magnitude smaller is required). Otherwise you are simply determining the spatial filtering of your probe or numerical algorithm.
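The parabola-fitting procedure can be sketched as follows, here applied to a hypothetical Gaussian-shaped transverse correlation  g(r) = \exp(-r^{2}/\lambda^{2}_{g})  whose microscale is known, so the recovered value can be checked against the truth:

```python
import numpy as np

# Synthetic transverse correlation with a known microscale:
# g(r) = exp(-r**2 / lam_g**2), so the osculating parabola is 1 - r**2/lam_g**2.
lam_g = 2.0
r = np.linspace(0.0, 0.5, 26)          # separations well below lam_g
g = np.exp(-(r / lam_g) ** 2)

# Least-squares fit of g ~ 1 - r**2 / lam**2 near the origin:
# the slope of (1 - g) against r**2 gives 1/lam**2.
slope = np.sum((1.0 - g) * r**2) / np.sum(r**4)
lam_est = 1.0 / np.sqrt(slope)
print(lam_est)   # close to 2.0; slightly off from truncating the expansion
```

Note that the fit range must lie well inside the microscale, exactly as the resolution warning above demands; fitting over separations comparable to  \lambda_{g}  would badly bias the estimate.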


Credits

This text was based on "Lectures in Turbulence for the 21st Century" by Professor William K. George, Professor of Turbulence, Chalmers University of Technology, Gothenburg, Sweden.
