
Introduction to the Physics Laboratory

In this laboratory you'll do experiments that illustrate certain physical
phenomena and give a glimpse of the experimental data supporting physical
laws. I hope you will also experience some of the thrill of discovery that is
the essence of physics.

Error Analysis and Graphs


Experimental data contain uncertainties, and in computing results from
them we wish to preserve the highest honestly allowable precision. Such
calculated results should give both the value found and the degree of
uncertainty in that value. Many methods of achieving this are found in many
texts; a brief review of some of the simpler techniques is given below.
Types of Errors
Errors may be classified into two kinds, systematic and random.
Systematic errors are likely to be constant and in the same direction. They
are caused by faults in the apparatus or flaws in the observer's technique. It
is therefore frequently possible to discover them and reduce their
magnitudes.
As an example where systematic errors would influence a measurement,
consider the measurement of length using a meter stick. If the meter stick
had a distorted scale because it was badly made, or its length varied with
environmental humidity or temperature, we would incur a systematic error in
our length measurements if we were unaware of these facts.
Random errors are those errors that are produced by unpredictable and
unknown variations in the total experimental process even when one does
the experiment as carefully as is humanly possible. These could be due to
fluctuations in the line voltage, temperature changes, mechanical vibrations,
or any of the many physical variations that may be inherent in the equipment
or any other aspect of the measurement process. The good news with
random errors, as opposed to systematic errors, is that they can be dealt with
in a consistent, statistical fashion. If enough measurements are taken, a
histogram of the results should look like a bell-shaped curve (or Gaussian
curve) with the mean, or true, value at the peak of the curve.
Accuracy and Precision
The central activity of experimental physical science is the measurement of
physical quantities. It is assumed that there exists a true value for any
physical quantity, and the measurement process is an attempt to discover that
true value. On the other hand, it is not assumed that the process will be
perfect and lead to the exact true value. Instead, it is expected that there will

be some difference between the true value and the measured value. The
terms accuracy and precision are used to describe different aspects of the
difference between them. From a scientific point of view, these have very
different meanings.
The accuracy of a measurement is determined by how close the result of the
measurement is to the true value. For example, several experiments
determine a value for the acceleration due to gravity. For this case the
accuracy of the result is decided by how close it is to the true value of 9.80
m/s². For several of the laboratory experiments, though, the true value of the
measured quantity is not known and the accuracy of the experiment cannot
be determined from the available data.
The precision of a measurement refers essentially to how many significant
digits there are in the result. It is an indication also of how reproducible the
results are when measurements of some quantity are repeated. When
repeated measurements of some quantity are made, the mean of those
measurements is considered to be the best estimate of the true value. The
smaller the variation of the individual measurements from the mean, the
more precise the quoted value of the mean is considered to be. This idea
about the relationship between the size of the variations from the mean and
the precision of the measurement shall be elaborated later.
Let's look at the following table of values of the acceleration due to gravity
measured by four students:
        Measurement 1   dev    Measurement 2   dev    Measurement 3   dev    Mean
Alf         7.83        1.60       11.61       2.18        8.85       0.58    9.43
Beth        9.53        0.27        9.38       0.12        8.87       0.39    9.26
Carl        8.70        0.04        8.75       0.01        8.77       0.03    8.74
Dee         9.72        0.04        9.86       0.10        9.70       0.06    9.76

dev = deviation from the mean value

Notice by the definitions of accuracy and precision that Dee's value of 9.76
is the most accurate while Carl's is the least accurate, and Carl's value is the
most precise while Alf's is the least precise.
Notice the interplay between the concepts of accuracy and precision that
must be considered. If a measurement appears to be very accurate, but the
precision is poor, the question arises whether or not the results are really
meaningful. Consider Alf's mean of 9.43, which differs from the true value
of 9.80 by only 0.37 and thus appears to be quite accurate. However, all of
his measurements have deviations greater than 0.37, and two of his
deviations are much larger than 0.37. It seems much more likely that
Alf's mean of 9.43 is due to luck than to a careful measurement. It seems
likely, however, that Dee's mean of 9.76 is meaningful because the
deviations of her individual measurements from the mean are small. In other
words, unless a measurement has high precision it cannot really be
considered to be accurate.
An examination of the significant figures given in this data leads to
essentially the same evaluation of each student's data. Consider Alf's data,
which indicates by the values stated for the individual measurements that
two places to the right of the decimal point are significant. However, that
conclusion is not supported by the fact that his deviations occur in the first
digit to the left of the decimal point. On the other hand, Dee's results show
deviations in the second place to the right of the decimal point, in agreement
with the fact that two places to the right of the decimal are given as
significant in the measured values. Thus from another point of view, Dee's
results are seen as meaningful, but Alf's are questionable.
Carl's results, on the other hand, are an example of a situation that is
common: the interplay between accuracy and precision. Carl's precision is
extremely high yet his accuracy is not very good. When a measurement has
high precision but poor accuracy, it is often the sign of a systematic error. A
systematic error is an error that tends to be in the same direction for repeated
measurements, giving results that are either consistently above the true value
or consistently below the true value. In many cases such errors are caused
by some flaw in the experimental apparatus, like not calibrating a device
correctly. Another source of a systematic error is failing to take into account
all of the variables that are important in the experiment. For Carl, if all his
values were consistently below the true value, this might mean that Carl
forgot to take into account friction, which would indeed cause all his

values to be low. But since his mean is well above the true value, this points
to a systematic error involving the equipment.
Percent Error and Difference
In several laboratories, the true value of the quantity being measured will be
known. In those cases, the accuracy of the experiment will be determined by
comparing the best estimate of the true value, or experimental value, with
the known true value. This can be done by figuring the percentage deviation
from the known true value (also known as percentage error). If E stands for the
experimental value, and K stands for the known value, then:

Percentage error = |E − K| / K × 100%

In other cases a given quantity will be measured by two different methods.


There will then be two different experimental values, E1 and E2, but the true
value may not be known. For this case the percentage difference between
the two experimental values can be calculated, but note that this tells nothing
about the accuracy of the experiment, but should be a measure of the
precision. The percentage difference is defined by:

Percentage difference = |E2 − E1| / [(E2 + E1)/2] × 100%
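These two comparisons are easy to compute. Below is a minimal Python sketch; the function names `percent_error` and `percent_difference` are ours, not standard routines:

```python
def percent_error(E, K):
    """Percentage deviation of an experimental value E from a known value K."""
    return abs(E - K) / abs(K) * 100.0

def percent_difference(E1, E2):
    """Percentage difference between two experimental values of the same
    quantity, measured relative to their average (no true value needed)."""
    return abs(E2 - E1) / ((E1 + E2) / 2.0) * 100.0

# Dee's mean of 9.76 m/s^2 compared with the accepted g = 9.80 m/s^2:
print(round(percent_error(9.76, 9.80), 2))        # 0.41
# Two experimental values compared with each other:
print(round(percent_difference(9.43, 9.26), 2))   # 1.82
```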

Measurement and Significant Figures


Let's say that you need to measure the length of a piece of string with a
meter stick. The meter stick in question has as its smallest markings a
centimeter. You measure the length and find that it falls about halfway
between 27 and 28 cm. You estimate that the length is 27.5 cm, but the
0.5 cm is not exact; it is a guess, so you could report the length of the string
as 27.5 ± 0.1 cm. When a number is reported, typically the number of digits
reported is the number known with any certainty. The uncertainty is
generally assumed to be one or two units of the last digit, but may be
different depending on the situation.

When counting the number of significant figures:

- All digits 1 through 9 count as significant figures.
- Zeroes to the left of all of the other digits are not significant.
- Zeroes between digits are significant.
- Zeroes to the right of all other digits are significant if after the decimal
  point, and may or may not be significant if before the decimal point.

For example,

Number    Number of Significant Figures    Possible Range of the Real Measurement
1.2       2                                1.1 – 1.3
3.61      3                                3.60 – 3.62
19.61     4                                19.60 – 19.62
0.017     2                                0.016 – 0.018
9.504     4                                9.503 – 9.505
0.1020    4                                0.1019 – 0.1021

Sometimes, the number of significant digits can be unclear. For example, if
we write
80
it is not clear whether the zero is significant or not. Is the measurement
between 70 and 90 or between 79 and 81?
If you write 80.0, there are 3 significant figures, because zeroes to the right
of the decimal point are significant, so we would assume a range of 79.9 to
80.1.

Using exponential notation helps remove this ambiguity. For example:

8 × 10¹ means 8 (±1) × 10¹ = measurement between 70 and 90
8.0 × 10¹ means 8.0 (±0.1) × 10¹ = measurement between 79 and 81
8.00 × 10¹ means 8.00 (±0.01) × 10¹ = measurement between 79.9 and 80.1
Often you will have to mathematically combine various measurement values
with significant figures. To ensure that the result represents the proper value
with the correct amount of uncertainty, these rules should be followed:
1) When adding or subtracting, figures to the right of the last column in
which all figures are significant should be dropped.
2) When multiplying or dividing, retain only as many significant figures
in the result as are contained in the measurement value with the least
number of significant figures.
3) The last significant figure is increased by 1 if the figure beyond it
(which is dropped) is 5 or greater.
These rules apply only to the determination of the number of significant
figures in the final result.
For example:

   753.1
    37.08
     0.697
 +  56.3
 ---------
   847.177 → 847.2

   327.23
 ×  36.73
 ---------
 12019.158 → 12020

 327.23 ÷ 36.73 = 8.90906 → 8.909
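The multiplication and division roundings above can be reproduced with a small helper that rounds to a given number of significant figures. `round_sig` is a hypothetical name we use here, not a Python built-in:

```python
import math

def round_sig(value, sig_figs):
    """Round a value to the given number of significant figures."""
    if value == 0:
        return 0.0
    # Position of the leading digit, e.g. 12019.158 -> 4, 0.017 -> -2
    exponent = math.floor(math.log10(abs(value)))
    return round(value, sig_figs - 1 - exponent)

# Multiplication and division keep the fewest significant figures of
# any factor; both operands here have 4, so the results keep 4:
print(round_sig(327.23 * 36.73, 4))   # 12020.0
print(round_sig(327.23 / 36.73, 4))   # 8.909
```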

Propagation of Uncertainties
Assume we have a function of two variables x and y, f(x,y). The variables x
and y are assumed to be independent in the sense that each can vary
arbitrarily without affecting the other. To find the change in f due to small
changes in x and y, we calculate the total differential of f. For example, if f
is a function for determining area, f = xy, then,
df = x dy + y dx

Now, dx is to be associated with our uncertainty in x, and dy with our
uncertainty in y. df will then be associated with our uncertainty in f resulting
from dx and dy. Now a difficulty arises. We do not know whether dx and
dy are positive or negative quantities since we have assumed the
uncertainties to be due to the randomness of nature. Since it is generally
best to assume the worst, both uncertainties are taken to be in the same
direction; hence

Δf_max = |x| Δy + |y| Δx

where the bars denote absolute value, and where we have changed our
notation to reflect the fact that our uncertainties are not true differentials,
i.e., we have made the identifications

df → Δf,    dx → Δx,    dy → Δy

For example, suppose the two sides of a rectangle are measured to be
65.27 cm and 73.83 cm, each with an uncertainty of 0.005 cm. The
uncertainty in the area f = xy would be given by

Δf_max = (65.27)(0.005) + (73.83)(0.005) ≈ 0.7 cm²

and we would write:

area = 4818.9 ± 0.7 cm²
Here are some formulas for finding the uncertainties in functions of
independent variables whose uncertainties are known:
Function                  Uncertainty formula

f(x, y) = x + y           Δf = Δx + Δy

f(x, y) = x − y           Δf = Δx + Δy

f(x, y) = xy              Δf/|f| = Δx/|x| + Δy/|y|

f(x, y) = x/y             Δf/|f| = Δx/|x| + Δy/|y|

f(x) = x^k                Δf/|f| = |k| Δx/|x|

Note that this includes the commonly occurring cases of x² and x^(1/2).

f(x) = sin x              Δf = |cos x| Δx

f(x) = cos x              Δf = |sin x| Δx

However, if f(x, y) is a complicated function of x and y, it is generally easier
to proceed by calculating f₀ = f(x, y), f₁ = f(x + Δx, y + Δy), and
f₂ = f(x + Δx, y − Δy), where x and y are the measured values. (The
important point is that Δx and Δy occur in f₁ with the same sign, whereas in
f₂ they occur with opposite signs.) The uncertainty in f(x, y) is then given by
the larger of the two magnitudes |f₀ − f₁| and |f₀ − f₂|.
If you know calculus, you can determine the differential form of the
equation and then divide it by the original equation to obtain the relative
uncertainty.
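The numerical procedure just described can be sketched in Python; `worst_case_uncertainty` is our name for the helper, and we apply it to the rectangle-area example from above:

```python
def worst_case_uncertainty(f, x, y, dx, dy):
    """Evaluate f with the uncertainties taken in the same direction and in
    opposite directions, and keep the larger shift from the central value."""
    f0 = f(x, y)
    f1 = f(x + dx, y + dy)   # both uncertainties with the same sign
    f2 = f(x + dx, y - dy)   # uncertainties with opposite signs
    return max(abs(f0 - f1), abs(f0 - f2))

def area(x, y):
    return x * y

# Rectangle sides 65.27 cm and 73.83 cm, each uncertain by 0.005 cm:
df = worst_case_uncertainty(area, 65.27, 73.83, 0.005, 0.005)
print(round(df, 1))   # 0.7, matching the area example above
```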
The Mean and The Standard Deviation for Repeated Measurements
For a set of n measurements where it is assumed that only random errors are
present, the mean value you determine represents your own best estimate of
the true value of whatever it is you are measuring. The mean is given by
the equation:

x̄ = (1/n) Σ xᵢ

For example, assume four measurements are made of some quantity x, and
the four results are 18.6, 19.3, 17.7, and 20.4. By the above equation, the
mean value is:

x̄ = (1/4)(18.6 + 19.3 + 17.7 + 20.4) = 19.0

Statistical theory, furthermore, states that the precision of the measurement
can be determined by calculating a quantity called the standard deviation
from the mean of the measurements. The symbol for the standard deviation
from the mean is σ. In a statistical sense, it gives the probability that the
measurements fall within a certain range of the measured mean, and it is
defined by the equation:

σ = √[ (1/(n − 1)) Σ (xᵢ − x̄)² ]

For the data given, the standard deviation is calculated to be:

σ = √[ (1/(4 − 1)) ((18.6 − 19.0)² + (19.3 − 19.0)² + (17.7 − 19.0)² + (20.4 − 19.0)²) ] = 1.1
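Python's standard `statistics` module reproduces this calculation directly (its `stdev` uses the same n − 1 denominator as the formula above):

```python
from statistics import mean, stdev

# The four repeated measurements from the example above:
data = [18.6, 19.3, 17.7, 20.4]

xbar = mean(data)     # best estimate of the true value
sigma = stdev(data)   # sample standard deviation, n - 1 in the denominator

print(round(xbar, 1))    # 19.0
print(round(sigma, 1))   # 1.1
```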

Probability theory states that approximately 68.3% of all repeated
measurements should fall within a range of plus or minus σ of the mean
(19.0 ± 1.1, or from 17.9 to 20.1). Furthermore, 95.5% of all repeated
measurements should fall within a range of ±2σ (19.0 ± 2.2, or from 16.8 to
21.2). Pictorially, the size of the standard deviation shows up in how skinny
or how fat the bell-shaped curve is drawn.
As a final note, 99.73% of all measurements should fall within 3σ of the
mean. This implies that if one of the measurements is 3σ or farther from the
mean, it is very unlikely to be a random error and much more likely to be a
personal error (oops!).
Linear Least Squares Fit and the Correlation Coefficient
In many cases of interest it is assumed that there exists a linear relationship
between two variables. In mathematical terms one can say that the variables
obey an equation of the form:
y = mx + b

where m is the slope of the line and b is the y-intercept (the value of y at x =
0). Often, though, because of random errors, a graph of the data does not
display a perfectly linear relationship where every data point lies exactly on
a straight line. So it would be convenient to determine the value of m and b
that produces the best straight line fit to the data. Any choice of values for
m and b will produce a straight line, with values of y determined by the
choice of x. For any such straight line, there will be a deviation between
each of the measured y's from the data points and the y's from the straight
line fit at the values of the measured x's from the data points. The linear least
squares fit is that m and b for which the sum of the squares of these
deviations is a minimum. The linear least squares fit process is also called
just linear fit or linear regression.
There is a quantitative measure of how well the data follow the straight line
obtained by the least squares fit. It is given by the value of a quantity called
the correlation coefficient (sometimes referenced as r or R²). This quantity
is a measure of the fit of the data to a straight line, with R² = 1.00 signifying
a perfect correlation and R² = 0 signifying no correlation at all. Thus, if the
computer calculates R² = 0.998, the data lie almost exactly along the least
squares fit line; while if the computer calculates R² = 0.002, the data points
on your graph probably look like a shotgun pattern, the least squares fit line
means pretty much nothing, and your data are not actually linear at all.
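A minimal from-scratch sketch of the fit in Python (spreadsheet programs report the same m, b, and R²); the function name `linear_fit` is ours:

```python
def linear_fit(xs, ys):
    """Least-squares slope m and intercept b for y = m*x + b, plus R^2."""
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    syy = sum((y - ybar) ** 2 for y in ys)
    m = sxy / sxx                  # slope that minimizes squared deviations
    b = ybar - m * xbar            # intercept: line passes through the means
    r2 = sxy ** 2 / (sxx * syy)    # square of the correlation coefficient
    return m, b, r2

# A perfectly linear data set gives R^2 = 1:
m, b, r2 = linear_fit([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
print(m, b, round(r2, 3))   # 2.0 0.0 1.0
```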
Preparing Graphs
A graph is often a useful way to represent data. This section outlines some
of the main considerations in drawing a graph.
Each graph should have a title, e.g. "Distance Traveled as a Function of
Time", or simply "v vs. t" if v and t are defined in the report. Do not forget
to label the coordinate axes with the variables and their units. Choose scales
for the axes which will spread the experimental points over the entire graph.
Generally, it is best to show the origin, but not if the data occupies a narrow
range of values far from the origin. Conventionally, the independent
variable is chosen for the abscissa (horizontal axis) and the dependent
variable for the ordinate (vertical axis). The title traditionally is written as
what is on the ordinate vs. what is on the abscissa.
If possible, draw uncertainty bars on each of your data points. These
uncertainty bars are the graphical representation of the uncertainty in your
data.
Often it is very helpful to fit a line or curve through your data points. The
software programs Excel and Origin 7.0 do an excellent job of this. When
this is done, it is important to make sure that the equation for the fit is
displayed on the graph, as well as some sort of quality-of-fit statistic, like R²
(see above). For the equation of the fit, it is important to change the
variables to those displayed on your graph.
Let us consider an example. The velocity of a ball undergoing constant
acceleration has been measured as a function of time. The data are:

Time (sec)    Velocity (m/s)
1.0           3.0
2.0           3.7
3.0           4.4
4.0           6.0
We will assume the uncertainty for all data points to be ±0.1.


The graph (using Excel) is shown below:

[Graph: "Velocity of the Ball as a Function of Time" — Velocity (m/s)
plotted against Time (sec), with the linear fit v = 0.7514t + 2.2086 and
R² = 0.9986 displayed on the chart.]

Occasionally it is helpful and/or necessary to extend the line or curve
beyond the range of measurements. This is called an extrapolation and
should be indicated by a dotted line.

Laboratory Reports
The following paragraphs will give you an idea of what a report should
contain and what it should accomplish. The order in which the contents
appear is of secondary importance, but they must all be there, clearly stated
and in a manner suited to the particular situation.
The purpose of a lab report such as this is to communicate fully and
coherently the results of an experimental investigation. Persons outside the
immediate group performing the experimental work may be interested in the
results of your work. It is therefore important that these results and your
analysis of them be put forth in a form suitable for such communication.
The effectiveness of this communication will depend on the clarity and
conciseness of your explanation of the work and the facility with which
desired material can be extracted from it.
Although there is no set universal form for reporting laboratory work, due to
the varied nature of experimentation, there are a few basic things which all
technical reports on experimental work should do. A lab report should at the
very beginning tell the reader what the experiment was for; what it
attempted to find out. Secondly, the report should inform the reader, in a
concise way, how the work performed is expected to accomplish this
purpose. Thirdly, where warranted, the report should display calculations
and analysis, charts, and figures, and at times significant raw data. Fourthly,
and most importantly, it should inform the reader of the particular results,
their significance, conclusions which may be drawn from them, and the
justification of such conclusions.
There are in general five major parts to a report even if a formal distinction
is not made between them. These are as follows:
1. Objective: The objective tells the reader, when he/she first starts
reading, what he/she will find in the report if he/she reads it through.
In one or two sentences it should summarize all topics presented in the
report.

2. Introduction/Theory: The introduction gives the purpose of the


experiment; it also indicates what principles are used, and how, in
order to obtain the results sought.
3. Experimental Procedure: The procedure (which is often combined
with the introduction) gives either a statement of standard methods
used, or if the experimental method is of particular interest, a brief
description of that method. Do not repeat material included in the
descriptive writings distributed for the experiment.
4. Results/Analysis: You should always present:
1) sample calculations to show how you got any derived results;
2) figures and tables representing results either final or
intermediate; and
3) the significant data upon which your work was based.
Repetitive calculations and arithmetic should not be shown, but
samples of repeated calculations and any other calculations of
interest should be presented in such a way that it is clear to any
competent reader just what you are calculating, how you are
calculating it, and what the answer is.
5. Discussion/Conclusion: It is important that final results and any
intermediate results of importance be explicitly presented in the
written body of the discussion. The degree to which the intermediate
results should be included is a matter of judgment on the part of the
author. It is up to him/her to consider how well the report answers
any questions which might arise in the reader's mind.
Some analysis and interpretation of your results must be included to
give them meaning. Their significance to the immediate purpose
should be shown clearly; and if they warrant it, one may include the
relationship of these results to the field in general.
Certainly any report of experimental results must contain an
indication and/or discussion of the reliability of the results, including
a quantitative statement thereof. These reliability statements,

combined with theoretical considerations, form a basis for the


justification of your conclusions.
