The quantities that are measured and are subject to fluctuations are, for example, the energy or the number of particles emitted in a decay process or a nuclear reaction.
Here we are dealing with random processes. The outcomes of such processes, for example the throw of a die or the number of disintegrations in a particular radioactive source in a period of time T, fluctuate from trial to trial, so that it is impossible to predict with certainty what the result will be for a given trial.
The variance

$s^2 = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x}_e)^2$

measures the fluctuation about the mean;

$\sigma^2 = \frac{1}{N-1}\sum_{i=1}^{N}(x_i - \bar{x})^2$
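As an illustration, the sample mean and variance above can be computed directly; a minimal sketch in Python (the counting data are invented for the example):

```python
# Sample mean and unbiased sample variance s^2 of N measurements,
# using the (N - 1) divisor from the formula above.

def sample_mean(xs):
    return sum(xs) / len(xs)

def sample_variance(xs):
    m = sample_mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

counts = [98, 102, 100, 97, 103, 100]  # illustrative counting data
print(sample_mean(counts), sample_variance(counts))  # 100.0 5.2
```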
The data are normally plotted as a frequency distribution.
Example:
Rolling a die; success is defined as obtaining a six.
In radioactive decay, the number of trials is equivalent to the number of nuclei in the sample under observation, and the measurement consists of counting those nuclei that undergo decay.
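The die example can be simulated to produce such a frequency distribution; a rough sketch (the number of throws per trial and the number of trials are arbitrary choices):

```python
import random
from collections import Counter

# Each "measurement" is 10 throws of a die; success = obtaining a six.
# Repeating the measurement many times gives the frequency
# distribution F(x) of the number of successes x.

random.seed(0)  # fixed seed so the illustration is reproducible

def count_sixes(n_rolls):
    return sum(1 for _ in range(n_rolls) if random.randint(1, 6) == 6)

trials = [count_sixes(10) for _ in range(1000)]
F = Counter(trials)  # F[x] = number of trials that gave x sixes
for x in sorted(F):
    print(x, F[x])
```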
1) Binomial distribution:
Widely applicable to all constant-p processes. Unfortunately, it is computationally cumbersome in radioactive decay, where the number of nuclei is always very large.

$P(x) = \frac{n!}{(n-x)!\,x!}\, p^x (1-p)^{n-x}$ for $n$ trials
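A quick numerical check of the binomial formula, using the die example (n = 10 throws, p = 1/6):

```python
from math import comb

# P(x) = n! / ((n - x)! x!) * p^x * (1 - p)^(n - x)
def binomial_p(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 10, 1 / 6  # die example: 10 throws, success = a six
probs = [binomial_p(x, n, p) for x in range(n + 1)]
print(probs[0], probs[1])  # probabilities of 0 and of 1 sixes
```

The probabilities sum to one, and the mean of the distribution comes out as np, as expected.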
2) Poisson distribution:
A mathematical simplification of the binomial distribution under the conditions that:
i) the success probability p is small and constant;
ii) the number of radioactive nuclei remains essentially constant during the observation, so the probability of recording a count from a given nucleus is small.

$P(x) = \frac{(pn)^x e^{-pn}}{x!} = \frac{\bar{x}^x e^{-\bar{x}}}{x!}$, where $\bar{x} = np$
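The simplification can be checked numerically: for small p and large n the Poisson probabilities agree closely with the binomial ones (the values of n and p below are arbitrary):

```python
from math import comb, exp, factorial

def binomial_p(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_p(x, xbar):
    # P(x) = xbar^x e^(-xbar) / x!
    return xbar**x * exp(-xbar) / factorial(x)

n, p = 10000, 0.0002  # small p, large n; mean xbar = n p = 2
for x in range(5):
    print(x, binomial_p(x, n, p), poisson_p(x, n * p))
```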
3) Gaussian distribution:
Both of the above distributions reduce to the Gaussian distribution when the number of trials is very large.
We discuss here the applicability of the Poisson and Gaussian distributions for nuclear counting.
Note that the binomial distribution requires two parameters:
i) the number of trials, n;
ii) the probability of success, p.
The Poisson distribution, in contrast, requires only one parameter: the mean $\bar{x} = pn$. So, if we know the mean value, we can construct the entire distribution.
$\sigma^2 = \sum_{x=0}^{n} (x - \bar{x})^2 P(x) = pn = \bar{x}$, so $\sigma = \sqrt{\bar{x}}$

For a low mean value, the distribution is quite asymmetrical, although centered at the mean.
Gaussian distribution:
If, in addition, the mean value is large, a further simplification occurs which leads to the Gaussian distribution:

$P(x) = \frac{1}{\sqrt{2\pi\bar{x}}}\, \exp\!\left(-\frac{(x-\bar{x})^2}{2\bar{x}}\right)$
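Assuming a large mean, the Gaussian form can be compared against the exact Poisson probabilities; a sketch (the mean of 100 is an arbitrary illustrative value):

```python
from math import exp, factorial, pi, sqrt

def poisson_p(x, xbar):
    return xbar**x * exp(-xbar) / factorial(x)

def gaussian_p(x, xbar):
    # P(x) = 1 / sqrt(2 pi xbar) * exp(-(x - xbar)^2 / (2 xbar))
    return exp(-((x - xbar) ** 2) / (2 * xbar)) / sqrt(2 * pi * xbar)

xbar = 100.0  # large mean, where the Gaussian approximation is good
for x in (90, 100, 110):
    print(x, poisson_p(x, xbar), gaussian_p(x, xbar))
```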
Any abnormal fluctuation indicates a possible fault in the operation of the experimental setup; agreement with the expected statistical fluctuation confirms that the setup is working properly.
Procedure:
i) Take N identical, independent measurements of the same physical quantity.
ii) Plot the distribution function F(x) vs. x.
iii) Find the mean $\bar{x}_e$ and the variance $s^2$.
The next task is to match the experimental data with an appropriate statistical model. Note that N should be reasonably large.
One method is to superimpose F(x) on P(x) and compare the shape and amplitude of the two distributions.
For a quantitative comparison, calculate $s^2$ from the experimental data and compare it with the $\sigma^2$ of the proposed statistical distribution.
Suppose we have a single measurement $x$ and would like to associate an error with it.
The error is $\sigma = \sqrt{x}$, i.e. the interval $x \pm \sqrt{x}$ will contain the true mean with 68% probability for a Gaussian distribution. That means the quoted error is one standard deviation.
If we want a 99% probability that the true mean is included, the quoted error should be $2.58\sigma$.
Caution:
The standard deviation can be associated with the square root only of a directly measured number of counts; we cannot take the square root of any derived quantity, such as a counting rate, an average, or a net count.
The error in such derived quantities should be obtained by the method of "error propagation".
Error propagation:
It can be shown that the error in any derived quantity u(x, y, z, ...) is given by

$\sigma_u^2 = \left(\frac{\partial u}{\partial x}\right)^2 \sigma_x^2 + \left(\frac{\partial u}{\partial y}\right)^2 \sigma_y^2 + \cdots$

The variables x, y, z must be chosen so that they are truly independent, in order to avoid the effects of correlation.
This means that the same specific count should not contribute to the value of more than one such variable.
Examples:
1) Sum or difference of counts: $u = x + y$ or $u = x - y$, $\sigma_u = \sqrt{\sigma_x^2 + \sigma_y^2}$
2) Multiplication by a constant: $u = Ax$, $\sigma_u = A\sigma_x$
3) Multiplication or division of counts: $u = xy$ or $u = x/y$, $\left(\frac{\sigma_u}{u}\right)^2 = \left(\frac{\sigma_x}{x}\right)^2 + \left(\frac{\sigma_y}{y}\right)^2$
4) Mean value of multiple independent counts: $\bar{x} = \frac{x_1 + x_2 + \cdots + x_N}{N}$, $\sigma_{\bar{x}} = \frac{\sqrt{\Sigma}}{N}$, where $\Sigma = x_1 + x_2 + \cdots + x_N$
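These rules are easy to apply in code. A small sketch, assuming each raw count x carries $\sigma_x = \sqrt{x}$ (the helper names are my own):

```python
from math import sqrt

def sigma_sum_or_diff(x, y):
    # u = x + y or u = x - y: sigma_u = sqrt(sigma_x^2 + sigma_y^2) = sqrt(x + y)
    return sqrt(x + y)

def sigma_scaled(A, x):
    # u = A x with A an exact constant: sigma_u = A sigma_x
    return A * sqrt(x)

def sigma_ratio(x, y):
    # u = x / y: (sigma_u / u)^2 = (sigma_x / x)^2 + (sigma_y / y)^2
    u = x / y
    return u * sqrt(1 / x + 1 / y)

x, y = 900, 400  # illustrative counts
print(sigma_sum_or_diff(x, y), sigma_scaled(2.0, x), sigma_ratio(x, y))
```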
If N independent measurements of the same quantity have been carried out and they do not all have nearly the same associated precision, then a simple average is no longer the optimal way to calculate a single "best value".
Form the weighted average

$\langle x \rangle = \sum_{i=1}^{N} a_i x_i$, with $\sum_{i=1}^{N} a_i = 1$,

whose variance is

$\sigma_{\langle x \rangle}^2 = \sum_{i=1}^{N} a_i^2 \sigma_{x_i}^2$

In order to minimize $\sigma_{\langle x \rangle}$, we must minimize $\sigma_{\langle x \rangle}^2$,

i.e. $\frac{\partial \sigma_{\langle x \rangle}^2}{\partial a_j} = 0$

We get $a_j \propto \frac{1}{\sigma_{x_j}^2}$, and hence

$\sigma_{\langle x \rangle}^2 = \left[\sum_{i=1}^{N} \frac{1}{\sigma_{x_i}^2}\right]^{-1}$

Therefore, the proper choice for the normalized weighting coefficient for $x_j$ is

$a_j = \frac{1/\sigma_{x_j}^2}{\sum_{i=1}^{N} 1/\sigma_{x_i}^2}$
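As a sketch, the weighted-average formulas can be implemented directly (the measurement values and errors below are invented):

```python
# Weighted average with a_j = (1/sigma_j^2) / sum(1/sigma_i^2),
# and sigma_<x>^2 = 1 / sum(1/sigma_i^2).

def weighted_mean(xs, sigmas):
    w = [1 / s**2 for s in sigmas]
    norm = sum(w)
    mean = sum(wi * xi for wi, xi in zip(w, xs)) / norm
    return mean, (1 / norm) ** 0.5

xs = [100.0, 104.0]
sigmas = [2.0, 4.0]  # the first measurement is more precise
m, s = weighted_mean(xs, sigmas)
print(m, s)  # the mean is pulled toward the more precise value
```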
Optimization of counting experiments:
$S = \frac{N_1}{T_{S+B}} - \frac{N_2}{T_B}$

$N_1$: total counts in the source-plus-background time $T_{S+B}$; $N_2$: total counts in the background time $T_B$.

Error propagation gives

$\sigma_S = \left[\frac{S+B}{T_{S+B}} + \frac{B}{T_B}\right]^{1/2}$ ... (1)
Differentiating (1),

$2\sigma_S\, d\sigma_S = -\frac{S+B}{T_{S+B}^2}\, dT_{S+B} - \frac{B}{T_B^2}\, dT_B$

Set $d\sigma_S = 0$ to find the optimum condition. Also $dT_{S+B} + dT_B = 0$, because the total time $T$ is constant.
$\left(\frac{T_{S+B}}{T_B}\right)_{\text{opt}} = \sqrt{\frac{S+B}{B}}$ ... (2)
Significance: the desired accuracy can be achieved without making the total time T very large.
If certain parameters of the experiment (such as detector size and pulse
acceptance criteria) can be varied, the optimal choice should correspond to
maximizing this figure of merit.
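Equation (2) fixes how a given total time T should be split between the two measurements; a minimal sketch (the rates S and B below are illustrative):

```python
from math import sqrt

# Split total time T between the (source + background) and the
# background-only measurements using (T_SB / T_B)_opt = sqrt((S + B) / B).

def optimal_split(T, S, B):
    ratio = sqrt((S + B) / B)   # T_SB / T_B at the optimum
    T_B = T / (1 + ratio)
    return T - T_B, T_B         # (T_SB, T_B)

T_SB, T_B = optimal_split(T=100.0, S=8.0, B=1.0)
print(T_SB, T_B)  # ratio = 3, so 75.0 and 25.0
```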
What is the probability that the next event will take place within a differential time dt after a time interval of length t?
Two independent processes must take place:
No events may occur within the time interval ‘0’ to ‘t’, but an event must take
place in the next differential time increment dt.
$P(0) = \frac{(rt)^0 e^{-rt}}{0!} = e^{-rt}$ (Poisson distribution)

$I_1(t)\, dt = e^{-rt} \cdot r\, dt = r e^{-rt}\, dt$ (the interval distribution)
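The interval distribution $I_1(t) = r e^{-rt}$ can be sampled by the inverse-transform method; a sketch (the rate r is an arbitrary choice):

```python
import random
from math import log

# Draw random intervals t = -ln(u) / r with u uniform on (0, 1];
# these follow the exponential interval distribution r exp(-r t).

random.seed(1)  # fixed seed for a reproducible illustration
r = 5.0         # event rate (events per unit time)

intervals = [-log(1.0 - random.random()) / r for _ in range(100000)]
mean_t = sum(intervals) / len(intervals)
print(mean_t)   # should be close to the expected mean interval 1/r = 0.2
```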
There are some occasions on which a digital "scaler" may be employed to reduce the rate at which data are recorded from a detector system.
A scaler functions as a data buffer, producing an output pulse when N input pulses have been accumulated.
A time interval of length t must be observed over which exactly (N−1) events are presented to the scaler, and an additional event must occur in the increment dt following this time interval.
$I_N(t)\, dt = P(N-1)\, r\, dt = \frac{(rt)^{N-1} e^{-rt}}{(N-1)!}\, r\, dt$
Setting

$\frac{dI_N(t)}{dt} = 0$

gives

$t_{\text{most probable}} = \frac{N-1}{r}$
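The result $t_{\text{most probable}} = (N-1)/r$ can be checked numerically against $I_N(t)$; a sketch with arbitrary N and r:

```python
from math import exp, factorial

# I_N(t) = r (r t)^(N - 1) exp(-r t) / (N - 1)!
def I_N(t, N, r):
    return r * (r * t) ** (N - 1) * exp(-r * t) / factorial(N - 1)

N, r = 5, 2.0
t_mp = (N - 1) / r  # predicted most probable interval
ts = [i * 0.001 for i in range(1, 10000)]
t_peak = max(ts, key=lambda t: I_N(t, N, r))
print(t_mp, t_peak)  # the numerical peak should sit at (N - 1)/r
```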