
Statistics and treatment of experimental data:

 Radioactive decay is a random process. Any measurement based on observing the radiation emitted in nuclear decay is subject to some degree of statistical fluctuation.

 The quantities that are measured and subject to these fluctuations are, for example, the energy or the number of particles emitted in a decay process or a nuclear reaction.

 The fluctuations can be quantified and compared with the predictions of statistical models. Once the uncertainties are understood, conclusions can be drawn from the result.

 There are two possibilities:

i) Careful organization and characterization of the experimental data: find the fluctuations and check whether they can be described by the known theoretical models.
ii) If the known models do not describe the behavior, it may be desirable to repeat the measurements to check whether the instrument was faulty and the data obtained cannot be relied on. On the other hand, it is also possible that the choice of the model was not correct.

 It is very important, therefore, to know how to make a quantitative judgment when the experimental data are compared with the model.

 Here we are dealing with random processes. The outcomes of such processes, for example the throw of a die or the number of disintegrations in a particular radioactive source in a period of time 'T', fluctuate from trial to trial, so that it is impossible to predict with certainty what the result of a given trial will be.

 The random variable can be discrete or continuous.


 Here we start the discussion with a discrete variable; the results can then be generalized to a continuous variable.
Suppose 'N' independent measurements: x_1, x_2, …, x_N

Mean:  \bar{x}_e = \frac{1}{N} \sum_{i=1}^{N} x_i = \sum_x x\,F(x)

F(x): frequency distribution function

Sample variance:  s^2 = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x}_e)^2; this measures the fluctuation about the mean.

\bar{x}_e: mean obtained from the experimental data points, where 'N' is finite.
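As a quick illustration of these two definitions, a minimal sketch in Python (the count values are made up):

```python
# Hypothetical data: ten repeated counts from the same source.
counts = [8, 12, 9, 11, 10, 9, 13, 10, 8, 10]
N = len(counts)

# Experimental mean: x_e = (1/N) * sum(x_i)
x_mean = sum(counts) / N

# Sample variance: s^2 = (1/(N-1)) * sum((x_i - x_e)^2)
s2 = sum((x - x_mean) ** 2 for x in counts) / (N - 1)
```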

 To be precise, the sample variance is more fundamentally defined as the average value of the squared deviation of each data point from the true mean value \bar{x} that would be derived if an infinite number of data points were accumulated.

\sigma^2 = \frac{1}{N-1} \sum_{i=1}^{N} (x_i - \bar{x})^2
 The data are normally plotted as the frequency distribution.

 The organization of the data leads to two conclusions:

i) Any set of data can be completely described by its frequency distribution function F(x).

ii) Two properties of this frequency function are of particular interest: the experimental mean and the sample variance.

 The experimental mean is the value about which the distribution is centered.

 The sample variance is a measure of the width of the distribution, or the amount of internal fluctuation in the data.
Statistical models:
 Under certain conditions, we can predict the distribution function that will
describe the result of many repetitions of a given measurement.

 For a given number of trials, we count the number of successes.

 Each trial has only two outcomes: success or failure.

 The probability of success is a constant for all trials.

Example:
 Rolling a die: success is defined as obtaining a six.

 Probability of success p = 1/6.

 Observing a given radioactive nucleus for a time period 't' constitutes one trial.

 The number of trials is equivalent to the number of nuclei in the sample under observation.
 The measurement consists of counting those nuclei that undergo decay.

 The probability of success of any one trial is 'p'.

 In radioactive decay, this probability is equal to p = 1 - e^{-\lambda t}, where \lambda is the decay constant.

1) Binomial distribution:
Widely applicable to all constant-p processes. Unfortunately, it is computationally cumbersome in radioactive decay, where the number of nuclei is always very large.

P(x) = \frac{n!}{(n-x)!\,x!}\, p^x (1-p)^{n-x}  for 'n' trials
2) Poisson distribution:
A mathematical simplification of the binomial distribution under the condition that the success probability 'p' is small and constant.

i) The observation time is small compared to the half-life of the source.

ii) The number of radioactive nuclei remains essentially constant during the observation.

iii) The probability of recording a count from a given nucleus is small.

P(x) = \frac{(pn)^x e^{-pn}}{x!} = \frac{\bar{x}^x e^{-\bar{x}}}{x!},  where \bar{x} = pn
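The simplification can be checked numerically. A sketch (the values of n and p below are made up for illustration) comparing the two probability mass functions:

```python
from math import comb, exp, factorial

def binomial_pmf(x, n, p):
    # P(x) = C(n, x) p^x (1-p)^(n-x) for n constant-p trials.
    return comb(n, x) * p**x * (1 - p) ** (n - x)

def poisson_pmf(x, mean):
    # P(x) = mean^x e^(-mean) / x!
    return mean**x * exp(-mean) / factorial(x)

# Many nuclei, small per-nucleus decay probability (illustrative values).
n, p = 100_000, 5e-5
mean = n * p  # = 5.0

# Pointwise differences between the exact binomial and its Poisson limit.
diffs = [abs(binomial_pmf(x, n, p) - poisson_pmf(x, mean)) for x in range(20)]
```

For these values the two distributions agree to a few parts in ten thousand at every x.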

3) Gaussian distribution:

 This is a further simplification if the average number of successes is relatively large.

 Both of the above distributions reduce to the Gaussian distribution when the number of trials is very large.

 We discuss here the applicability of the Poisson and Gaussian distributions for nuclear counting.
 Note that the binomial distribution requires two parameters:
i) The number of trials ‘n’
ii) The probability of success, p.

 A significant simplification has occurred in deriving the Poisson distribution; only one parameter is required, which is the product of n and p, i.e. np.

 So, if we know the mean value, we can construct the entire distribution.

 Some properties of the Poisson distribution:

Mean:  \bar{x} = \sum_{x=0}^{n} x\,P(x) = pn

Variance:  \sigma^2 = \sum_{x=0}^{n} (x - \bar{x})^2 P(x) = pn

\sigma^2 = \bar{x}

For low mean values, the distribution is quite asymmetric, although centered at the mean.
Gaussian distribution:

 The Poisson distribution holds as a mathematical simplification to the binomial distribution in the limit p << 1.

 If, in addition, the mean value is large, an additional simplification occurs which leads to the Gaussian distribution:

P(x) = \frac{1}{\sqrt{2\pi \bar{x}}} \exp\left(-\frac{(x - \bar{x})^2}{2\bar{x}}\right)

 The distribution is characterized by a single parameter \bar{x}, which is given by the product np.

 The predicted variance \sigma^2 is again equal to the mean value \bar{x}.

 The distribution is symmetric about the mean value \bar{x}.
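A numerical sketch (the mean value is chosen arbitrarily) of how closely the Gaussian form tracks the Poisson pmf once the mean is large:

```python
from math import exp, factorial, pi, sqrt

def poisson_pmf(x, m):
    # Exact Poisson probability with mean m.
    return m**x * exp(-m) / factorial(x)

def gaussian(x, m):
    # Gaussian with mean m and variance m, as in the formula above.
    return exp(-((x - m) ** 2) / (2 * m)) / sqrt(2 * pi * m)

m = 100  # illustrative large mean
diffs = [abs(poisson_pmf(x, m) - gaussian(x, m)) for x in range(50, 151)]
```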


Applications of statistical models:

1) To check whether the observed fluctuations are consistent with the expected statistical fluctuations:

 An abnormal amount of fluctuation would indicate a fault in the operation of the experimental set-up.

Procedure:
i) Take 'N' identical independent measurements of the same physical quantity.
ii) Plot the frequency distribution function F(x) vs. x.
iii) Find the mean \bar{x}_e and the variance s^2.

 The next task is to match the experimental data with an appropriate statistical model. Note that 'N' should be reasonably large.

 We will want to match to either a Poisson or a Gaussian distribution (depending on how large the mean value is), either of which is fully specified by its own mean value \bar{x}.

 Set \bar{x} = \bar{x}_e.

 F(x) should be an approximation to P(x), provided the statistical model accurately describes the distribution from which the data arose.

 One method is to superimpose F(x) on P(x) and compare the shape and amplitude of the two distributions.

 Such a comparison is only qualitative.

 For a quantitative comparison, calculate s^2 from the experimental data and compare it with the variance \sigma^2 of the proposed statistical distribution.

2) Estimation of the precision of a single measurement:

 We have a single measurement and would like to associate an error with it.

 In other words, we would like an estimate of the variance we would obtain if we were to repeat the measurement many times.

 Assume a Poisson or Gaussian distribution.

 Assume that the measured value x equals the mean \bar{x} of the theoretical distribution.

 Now that \bar{x} is known, the entire theoretical probability distribution can be obtained.

 The expected sample variance is s^2 \cong \sigma^2 of the statistical distribution from which we think the measurement 'x' is drawn.

\sigma = \sqrt{\bar{x}} \cong \sqrt{x}

 The result is quoted as x \pm \sigma, or x \pm \sqrt{x}, which will contain the true mean with 68% probability for a Gaussian distribution. That is, the quoted error is one standard deviation.

 For a 99% probability that the true mean is included, the quoted error should be 2.58\sigma.
Caution:

 The above conclusions can be applied only to a measurement of a number of successes.

 In radioactive decay or nuclear counting, we may apply \sigma = \sqrt{x} only if 'x' represents a counted number of successes, that is, a number of events recorded from a detector over a given observation time.

 In other words, we cannot associate the standard deviation \sigma with the square root of any quantity that is not a directly measured number of counts.

 For example, the association does not apply to

i) counting rates,
ii) sums or differences of counts,
iii) averages of independent counts,
iv) any derived quantity.

 The error in the above quantities should be obtained by the method of "error propagation".
Error propagation:

 It can be shown that the error in any derived quantity u(x, y, z, …) is given by

\sigma_u^2 = \left(\frac{\partial u}{\partial x}\right)^2 \sigma_x^2 + \left(\frac{\partial u}{\partial y}\right)^2 \sigma_y^2 + \cdots

 The variables x, y, z, … must be chosen so that they are truly independent, in order to avoid the effects of correlation. This means that the same specific count should not contribute to the values of more than one such variable.

Examples:

1) Sum and difference:
u = x + y  or  u = x - y
\sigma_u = \sqrt{\sigma_x^2 + \sigma_y^2}

2) Multiplication or division by a constant:
u = Ax
\sigma_u = A\,\sigma_x

3) Multiplication or division of counts:
u = xy  (or  u = x/y)
\left(\frac{\sigma_u}{u}\right)^2 = \left(\frac{\sigma_x}{x}\right)^2 + \left(\frac{\sigma_y}{y}\right)^2

4) Mean value of multiple independent counts:
\Sigma = x_1 + x_2 + \cdots + x_N,  \bar{x} = \Sigma / N
\sigma_{\bar{x}} = \sigma_\Sigma / N = \sqrt{\Sigma} / N
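A sketch applying rules 1) and 2) to made-up count values: net counts from a gross-minus-background difference, then a counting rate from division by a fixed counting time:

```python
from math import sqrt

x, y = 2000, 800      # hypothetical gross counts and background counts
t = 10.0              # counting time in seconds (a constant, carries no error)

# Rule 1: difference of counts; sigma_u^2 = sigma_x^2 + sigma_y^2 = x + y.
u = x - y             # net counts
sigma_u = sqrt(x + y)

# Rule 2: division by a constant scales the error by the same constant.
rate = u / t          # net counting rate
sigma_rate = sigma_u / t
```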

5) Combination of independent measurements with unequal errors.

 If ‘N’ independent measurements of the same quantity have been carried out
and they do not all have nearly the same associated precision, then a simple
average no longer is the optimal way to calculate a single ‘best value’.

We instead give more weight to those measurements with small values of \sigma_{x_i} (the standard deviation associated with x_i) and less weight to measurements for which this estimated error is large.

<x> = \frac{\sum_{i=1}^{N} a_i x_i}{\sum_{i=1}^{N} a_i}

 We need a criterion by which the weighting factors a_i should be chosen in order to minimize the expected error in <x>.
 Let \alpha \equiv \sum_{i=1}^{N} a_i

\sigma_{<x>}^2 = \sum_{i=1}^{N} \left(\frac{\partial <x>}{\partial x_i}\right)^2 \sigma_{x_i}^2 = \frac{1}{\alpha^2} \sum_{i=1}^{N} a_i^2 \sigma_{x_i}^2

\sigma_{<x>}^2 = \frac{\beta}{\alpha^2},  where  \beta \equiv \sum_{i=1}^{N} a_i^2 \sigma_{x_i}^2

 In order to minimize \sigma_{<x>}, we must minimize \sigma_{<x>}^2, i.e.

\frac{\partial \sigma_{<x>}^2}{\partial a_j} = 0

 We get  a_j = \frac{\beta}{\alpha} \cdot \frac{1}{\sigma_{x_j}^2}

 If we choose to normalize the weighting coefficients,

\sum_{i=1}^{N} a_i = \alpha = 1

a_j = \frac{\beta}{\sigma_{x_j}^2}

\sigma_{<x>}^2 = \beta = \left[\sum_{i=1}^{N} \frac{1}{\sigma_{x_i}^2}\right]^{-1}

 Therefore, the proper choice for the normalized weighting coefficient for x_j is

a_j = \frac{1}{\sigma_{x_j}^2} \left[\sum_{i=1}^{N} \frac{1}{\sigma_{x_i}^2}\right]^{-1}
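A sketch of this weighted-average result with made-up measurements x_i of unequal precision:

```python
# Hypothetical measurements of the same quantity with unequal errors.
xs = [10.2, 9.8, 10.6]
sigmas = [0.1, 0.3, 0.5]

# Normalized weights a_j = (1/sigma_j^2) / sum(1/sigma_i^2).
inv_var = [1 / s**2 for s in sigmas]
a = [w / sum(inv_var) for w in inv_var]

# Best value <x> and its error sigma_<x> = [sum(1/sigma_i^2)]^(-1/2).
x_best = sum(ai * xi for ai, xi in zip(a, xs))
sigma_best = (1 / sum(inv_var)) ** 0.5
```

Note that the combined error is smaller than the smallest individual error, as the derivation guarantees.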
Optimization of counting experiments:

 The principle of error propagation can be applied in the design of a counting experiment to minimize the associated statistical uncertainty.

 Consider a long-lived radioactive source:

S: counting rate due to the source alone, without background
B: counting rate due to background
T_{S+B}: time for counting source + background
T_B: time for counting background only, in the absence of the source

 The net rate due to the source alone:

S = \frac{N_1}{T_{S+B}} - \frac{N_2}{T_B}

N_1: total counts over T_{S+B};  N_2: total counts over T_B

 Error propagation gives

\sigma_S = \left[\frac{S+B}{T_{S+B}} + \frac{B}{T_B}\right]^{1/2}    (1)

 Assume that a fixed total time T = T_{S+B} + T_B is available to us. Differentiating (1),

2\sigma_S\, d\sigma_S = -\frac{S+B}{T_{S+B}^2}\, dT_{S+B} - \frac{B}{T_B^2}\, dT_B

 Set d\sigma_S = 0 to find the optimum condition. Also dT_{S+B} + dT_B = 0, because T is constant.

\left.\frac{T_{S+B}}{T_B}\right|_{opt} = \sqrt{\frac{S+B}{B}}    (2)

 A figure of merit that can be used to characterize this type of counting experiment is the inverse of the total time, 1/T, required to determine S to within a given statistical accuracy.

Significance: to get the desired accuracy without having 'T' very large.

 If certain parameters of the experiment (such as detector size and pulse acceptance criteria) can be varied, the optimal choice should correspond to maximizing this figure of merit.

 Combining equations (1) and (2):

\frac{1}{T} = \frac{\epsilon^2 S^2}{\left(\sqrt{S+B} + \sqrt{B}\right)^2},  where  \epsilon \equiv \frac{\sigma_S}{S}

 This equation predicts the attainable statistical accuracy (in terms of the fractional standard deviation \epsilon) when a total time T is available.

 If S >> B:  \frac{1}{T} = \epsilon^2 S

 In this limit, the statistical influence of the background is negligible. The figure of merit 1/T is maximized simply by choosing all experimental parameters to maximize S, the rate due to the source alone.

 The opposite extreme of a small source rate in a much larger background (S << B) is typical of low-level radioactivity measurements:

\frac{1}{T} = \frac{\epsilon^2 S^2}{4B}

 The figure of merit is then maximized by choosing experimental conditions so that the ratio S^2/B is maximized.
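A sketch of the optimal time split of equation (2), with made-up rates S and B and a made-up total time T, checked against the closed form for the fractional error:

```python
from math import sqrt

S, B = 100.0, 25.0    # hypothetical source and background rates (counts/s)
T = 1000.0            # total available counting time (s)

# Optimum split: T_{S+B} / T_B = sqrt((S + B) / B).
ratio = sqrt((S + B) / B)
T_B = T / (1 + ratio)
T_SB = T - T_B

# Fractional standard deviation of S from equation (1).
sigma_S = sqrt((S + B) / T_SB + B / T_B)
epsilon = sigma_S / S
```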
Distribution of time intervals:

 Consider the time intervals between random events.

 Assume that the probability of occurrence per unit time is constant.

 This condition will be satisfied by a long-lived radioactive source.

\frac{dN}{dt} = r;  r\,dt is the differential probability that an event will take place in the differential time dt.

r: the average rate at which events occur.

 Interval between successive events:

 Suppose an event has occurred at time t = 0.

 What is the probability that the next event will take place within a differential time dt after a time interval of length 't'?

 Two independent processes must take place: no event may occur within the time interval '0' to 't', but an event must take place in the next differential time increment dt.

(Probability of next event taking place in dt after time 't') = (Probability of no events during '0' to 't') × (Probability of an event in dt)

I_1(t)\,dt = P(0) \times r\,dt

P(0) = \frac{(rt)^0 e^{-rt}}{0!} = e^{-rt}    (Poisson distribution)

I_1(t)\,dt = r e^{-rt}\,dt    (interval distribution)

 The most probable interval is zero.

 The average interval length:

\bar{t} = \frac{\int_0^\infty t\,I_1(t)\,dt}{\int_0^\infty I_1(t)\,dt} = \frac{1}{r},  an expected result.
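A sketch that simulates intervals drawn from I_1(t) = r e^{-rt} by inverse-transform sampling (the rate r, sample size, and seed are arbitrary choices) and checks that the average interval comes out near 1/r:

```python
import random
from math import log

random.seed(7)
r = 4.0               # hypothetical event rate (events per unit time)
n = 100_000

# If u is uniform on [0, 1), then -ln(1 - u)/r is exponential with rate r.
intervals = [-log(1 - random.random()) / r for _ in range(n)]

mean_interval = sum(intervals) / n   # expect ~ 1/r
```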
Intervals between scaled events:

 There are some occasions in which a digital "scaler" may be employed to reduce the rate at which data are recorded from a detector system.

 A scaler functions as a data buffer by producing an output pulse when 'N' input pulses have been accumulated.

 A time interval of length 't' must be observed over which exactly (N-1) events are presented to the scaler, and an additional event must occur in the increment dt following this time interval.

I_N(t)\,dt = P(N-1)\,r\,dt

I_N(t)\,dt = \frac{(rt)^{N-1} e^{-rt}}{(N-1)!}\, r\,dt

This is the interval distribution for N-scaled intervals.

 It is important to note the more uniform intervals that accompany larger values of 'N'.

 The average interval length:

\bar{t} = \frac{\int_0^\infty t\,I_N(t)\,dt}{\int_0^\infty I_N(t)\,dt} = \frac{N}{r}

 The most probable interval is found from

\frac{dI_N(t)}{dt} = 0

t_{most\ probable} = \frac{N-1}{r}
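A numerical sketch of the N-scaled interval distribution (the values of r and N are arbitrary), checking that its mean is near N/r and its peak near (N-1)/r:

```python
from math import exp, factorial

r, N = 2.0, 5         # hypothetical rate and scaling factor

def I_N(t):
    # N-scaled interval distribution: r (rt)^(N-1) e^(-rt) / (N-1)!
    return r * (r * t) ** (N - 1) * exp(-r * t) / factorial(N - 1)

# Crude numerical integration over a grid covering essentially all of I_N.
dt = 0.001
ts = [i * dt for i in range(1, 20_000)]
norm = sum(I_N(t) * dt for t in ts)                  # should be ~1
mean_t = sum(t * I_N(t) * dt for t in ts) / norm     # expect ~ N/r
t_peak = max(ts, key=I_N)                            # expect ~ (N-1)/r
```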
