Methods
Layering Uncertainties
Take a standard probability distribution, say the Gaussian. The measure of dispersion, here s, is estimated, and we need to attach some measure of dispersion around it: the uncertainty about the rate of uncertainty, so to speak, or a higher-order parameter, similar to what is called the "volatility of volatility" in the lingo of option operators (see Taleb, 1997; Dupire, x; Derman, x); here it would be the "uncertainty rate about the uncertainty rate". And there is no reason to stop there: we can keep nesting these uncertainties into higher orders, with the uncertainty rate of the uncertainty rate of the uncertainty rate, and so forth. There is no reason to have certainty anywhere in the process.
Now, for that very reason, this paper shows that, in the absence of knowledge about the structure of the higher orders of deviations, we are forced into power-law tails. Most derivations of power-law tails have focused on processes (Zipf-Simon preferential attachment, cumulative advantage, entropy maximization under constraints, etc.). Here we derive them merely from lack of knowledge about the rates of knowledge.
Higher order integrals in the Standard Gaussian Case
Define f(m,s,x) as the Gaussian density function for value x with mean m and standard deviation s.
\[
\int_0^{\infty} \cdots \int_0^{\infty} f(m, s, x)\, f(\bar{s}, s_1, s)\, f(\bar{s}_1, s_2, s_1)\, f(\bar{s}_2, s_3, s_2) \cdots \; ds\; ds_1\; ds_2 \cdots ds_N
\]
The problem is that this formulation is parameter-heavy and requires the specification of the subordinated distributions: we would need to specify a measure f for each layer of error rate (and extract the Wiener chaos terms to be able to add the integrals in place of summing them). Instead, each layer can be approximated by its mean deviation, as we will see next.
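For intuition, the layered integral can be sampled rather than solved in closed form: draw the dispersion down through the layers, then draw x conditional on the resulting s. A minimal Monte Carlo sketch (the function names are ours, and each layer is reduced to the two-state perturbation used in the next section, so this is an illustration of the mechanism, not of the full integral):

```python
import random

def sample_sigma(s0, a, rng):
    """Draw one standard deviation through len(a) layers of uncertainty:
    layer k perturbs the running value multiplicatively by (1 + a[k])
    or (1 - a[k]) with equal probability."""
    s = s0
    for ak in a:
        s *= 1 + ak * rng.choice((-1, 1))
    return s

def sample_x(m, s0, a, rng):
    """Draw x from a Gaussian whose standard deviation is itself uncertain."""
    return rng.gauss(m, sample_sigma(s0, a, rng))

rng = random.Random(7)
draws = [sample_x(0.0, 1.0, [0.2, 0.2, 0.2], rng) for _ in range(200_000)]
m2 = sum(d * d for d in draws) / len(draws)
# The empirical second moment sits near (1 + 0.2**2)**3 = 1.1249, not 1:
# each layer of uncertainty inflates the variance.
```
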
Simplification using a nested series of two states for s
A quite effective simplification to capture the convexity, i.e. the ratio of (or difference between) $f(m, s, x)$ and $\int_0^{\infty} f(m, s, x)\, f(\bar{s}, s_1, s)\, ds$, would be to use a weighted average of values of s: say, for the simple case of one order of stochastic volatility, $s(1 + a[1])$ and $s(1 - a[1])$, with $0 \le a[1] < 1$, where a[1] is the proportional mean absolute deviation for s. In other words, a[1] is a measure of the absolute error rate for s.
Thus the distribution using the first-order stochastic standard deviation can be expressed as:
\[
f(x) = \frac{1}{2} \left\{ f\big(m, s(1 + a[1]), x\big) + f\big(m, s(1 - a[1]), x\big) \right\}
\]
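A minimal numerical sketch of this two-state density (the function names are ours):

```python
import math

def gauss_pdf(x, m, s):
    """Gaussian density at x with mean m and standard deviation s."""
    return math.exp(-((x - m) ** 2) / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))

def two_state_pdf(x, m=0.0, s=1.0, a1=0.2):
    """First-order stochastic standard deviation: equal-weight mixture of
    Gaussians with dispersions s(1 + a1) and s(1 - a1)."""
    return 0.5 * (gauss_pdf(x, m, s * (1 + a1)) + gauss_pdf(x, m, s * (1 - a1)))
```

At x = 0 the mixture is more peaked than the plain Gaussian, and at x = 4 its tail is already heavier: the convexity effect the section quantifies.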
Now assume uncertainty about the error rate a[1], expressed by a[2]. Thus in place of a[1] we have $\frac{1}{2} a[1](1 + a[2])$ and $\frac{1}{2} a[1](1 - a[2])$.
The standard deviation thus branches into a binary tree,
\[
s \;\to\; (1 \pm a(1))\, s \;\to\; (1 \pm a(1))(1 \pm a(2))\, s,
\]
with first-order states $(1 - a(1))\, s$ and $(1 + a(1))\, s$, and second-order states $(1 - a(1))(1 - a(2))\, s$, $(1 - a(1))(1 + a(2))\, s$, $(1 + a(1))(1 - a(2))\, s$, and $(1 + a(1))(1 + a(2))\, s$.
\[
f(x) = \frac{1}{2^N} \sum_{i=1}^{2^N} f\big(m, s\, M_i^N, x\big)
\]
\[
M^N = \left\{ \prod_{j=1}^{N} \big(a(j)\, T_{i,j} + 1\big) \right\}_{i=1}^{2^N}
\]
where $T_{i,j}$ is the element in the i-th row and j-th column of the matrix of the exhaustive combinations of N-tuples of $(-1, 1)$, that is, the matrix whose $2^N$ rows are the N-dimensional vectors $\{\pm 1, \pm 1, \ldots, \pm 1\}$ representing all combinations of 1 and -1.
for N = 3:
\[
T = \begin{pmatrix}
-1 & -1 & -1 \\
-1 & -1 & 1 \\
-1 & 1 & -1 \\
-1 & 1 & 1 \\
1 & -1 & -1 \\
1 & -1 & 1 \\
1 & 1 & -1 \\
1 & 1 & 1
\end{pmatrix}
\quad \text{and} \quad
M^3 = \begin{pmatrix}
(1 - a(1))(1 - a(2))(1 - a(3)) \\
(1 - a(1))(1 - a(2))(1 + a(3)) \\
(1 - a(1))(1 + a(2))(1 - a(3)) \\
(1 - a(1))(1 + a(2))(1 + a(3)) \\
(1 + a(1))(1 - a(2))(1 - a(3)) \\
(1 + a(1))(1 - a(2))(1 + a(3)) \\
(1 + a(1))(1 + a(2))(1 - a(3)) \\
(1 + a(1))(1 + a(2))(1 + a(3))
\end{pmatrix}
\]
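The $2^N$-state mixture can be enumerated directly; a sketch in Python, with `itertools.product` generating the rows of T (the helper names are ours):

```python
import itertools
import math

def gauss_pdf(x, m, s):
    """Gaussian density at x with mean m and standard deviation s."""
    return math.exp(-((x - m) ** 2) / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))

def nested_pdf(x, m, s, a):
    """Density under N = len(a) nested error layers: average the Gaussian
    over all 2^N sign vectors t in {-1, 1}^N, each scaling s by
    M_i = prod_j (a[j] * t[j] + 1)."""
    N = len(a)
    total = 0.0
    for t in itertools.product((-1, 1), repeat=N):
        Mi = 1.0
        for aj, tj in zip(a, t):
            Mi *= aj * tj + 1
        total += gauss_pdf(x, m, s * Mi)
    return total / 2 ** N
```

With all a[j] = 0 this reduces exactly to the plain Gaussian, and with N = 1 to the two-state density above.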
[Figure x: Thicker tails (and higher peaks) for higher values of N; here N = 0, 5, 25, 50, all values of a = 1/5.]
With all the layers' error rates set to the same constant a, the moments of f(x) are:

Order   Moment
1       0
2       $(1 + a^2)^N\, s^2$
3       0
4       $3\,(1 + 6 a^2 + a^4)^N\, s^4$
5       0
6       $15\,(1 + 15 a^2 + 15 a^4 + a^6)^N\, s^6$
7       0
8       $105\,(1 + 28 a^2 + 70 a^4 + 28 a^6 + a^8)^N\, s^8$
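The even-order entries of this table can be checked by brute-force enumeration over the $2^N$ states (a small verification script, not part of the paper): for even k, the k-th moment of x equals $(k-1)!!\, E[M^k]\, s^k$, and the odd moments vanish by symmetry.

```python
import itertools
import math

def moment_factor(N, a, k):
    """E[M^k] where M = prod_j (1 + a * t_j), t uniform on {-1, 1}^N.
    For even k, the k-th moment of x is (k-1)!! * moment_factor(N, a, k) * s**k."""
    total = 0.0
    for t in itertools.product((-1, 1), repeat=N):
        Mi = 1.0
        for tj in t:
            Mi *= 1 + a * tj
        total += Mi ** k
    return total / 2 ** N

a, N = 0.3, 4
print(moment_factor(N, a, 2), (1 + a ** 2) ** N)   # both ~= 1.4116
```

Each factor of M contributes independently, so $E[M^2]$ factorizes into $(1 + a^2)^N$, and likewise for the higher orders.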
Consequences
We can see here how the moments explode for a constant a > 0, and also in the more general case of variable a with a[n] ≥ a[n-1].
We can thus prove the following:
A- Even the smallest value of a > 0 leads to the second moment (though not the first) going to infinity as N → ∞, since $(1 + a^2)^N$ is unbounded. So something as small as a .001% error rate will still lead to the explosion of the moments and the invalidation of the use of the class of $\mathcal{L}^2$ distributions (those with finite second moment).
B- We need power laws for epistemic reasons. [I haven't tried to prove it yet, just numerical methods.]
[Figure x: Log-log plot of Pr(x) against x, showing the convergence to a power law as N rises; N = 1, 5, 10, 25, 50, all a = 1/10.]
When the higher orders of a[i] decline, the moments tend to be capped. Take a "bleed" of higher-order errors at the rate λ, 0 ≤ λ < 1, so that a[N] = λ a[N-1], hence a[N] = λ^{N-1} a[1], with a[1] the conventional intensity of stochastic standard deviation.
With N = 2, the second moment becomes:
\[
s^2 \left(1 + a[1]^2\right)\left(1 + \lambda^2 a[1]^2\right)
\]
With N = 3,
\[
s^2 \left(1 + a[1]^2\right)\left(1 + \lambda^2 a[1]^2\right)\left(1 + \lambda^4 a[1]^2\right)
\]
finally, for the general case:
\[
M_2(N) = s^2 \left(1 + a[1]^2\right) \prod_{i=1}^{N-1} \left(1 + \lambda^{2 i} a[1]^2\right)
\]
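A quick numerical check of this product (the helper name `second_moment` is ours): for λ < 1 the product converges as N grows, while at λ = 1 (no bleed of the error rates) it reduces to $(1 + a[1]^2)^N$ and explodes, recovering the constant-a case above.

```python
def second_moment(N, a1, lam, s=1.0):
    """M2(N) = s^2 (1 + a1^2) * prod_{i=1}^{N-1} (1 + lam^(2i) * a1^2)."""
    m2 = s * s * (1 + a1 * a1)
    for i in range(1, N):
        m2 *= 1 + lam ** (2 * i) * a1 * a1
    return m2

print(second_moment(400, 0.2, 0.9))   # converges, to about 1.23
print(second_moment(200, 0.2, 1.0))   # explodes: (1.04)**200 ~ 2.5e3
```
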
Using the q-Pochhammer symbol $(a; q)_N = \prod_{i=0}^{N-1} \left(1 - a q^i\right)$, this can be written compactly as
\[
M_2(N) = s^2 \left(-a[1]^2; \lambda^2\right)_N
\]
Taking the limit:
\[
\lim_{N \to \infty} M_2(N) = s^2 \left(-a[1]^2; \lambda^2\right)_{\infty}
\]
So the limiting second moment for λ = .9 and a[1] = .2 is approximately 1.23 s², a significant but relatively benign convexity bias.
As to the fourth moment, by recursion:
\[
M_4(N) = 3\, s^4 \prod_{i=0}^{N-1} \left(1 + 6 \lambda^{2 i} a[1]^2 + \lambda^{4 i} a[1]^4\right)
\]
So the limiting fourth moment for λ = .9 and a[1] = .2 is just 9.88 s⁴, more than 3 times the Gaussian's (3 s⁴), but still a finite fourth moment. For small values of a, even with λ close to 1, the fourth moment collapses to the level of the Gaussian's.
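The corresponding numerical check for the fourth moment (again, the helper name is ours): the partial products converge for λ < 1, and for λ = .9, a[1] = .2 they settle near the 9.88 s⁴ figure quoted above.

```python
def fourth_moment(N, a1, lam, s=1.0):
    """M4(N) = 3 s^4 * prod_{i=0}^{N-1} (1 + 6 lam^(2i) a1^2 + lam^(4i) a1^4)."""
    m4 = 3.0 * s ** 4
    for i in range(N):
        m4 *= 1 + 6 * lam ** (2 * i) * a1 ** 2 + lam ** (4 * i) * a1 ** 4
    return m4

print(fourth_moment(400, 0.2, 0.9))   # ~9.88, versus 3.0 for the plain Gaussian
```
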