# Error Analysis in an Undergraduate Science Laboratory

%%% Formula reference page.

## Contents

- 1 Error Analysis
- 2 First Experiment
- 3 Second Experiment
- 4 Third Experiment
- 5 Fourth Experiment
- 6 Lab involving multiple measurements of same quantity
- 7 Lab involving a sine in the formula

## Error Analysis

All measurements, however carefully made, give a range of possible values referred to as an uncertainty or error. Since all of science depends on measurements, it is important to understand uncertainties and where they come from. Error analysis is the set of techniques for dealing with them.

In science, the word "error" does not take the usual meaning of "mistake". Instead, it is often used interchangeably with "uncertainty" when talking about the result of a measurement. There are many aspects to error analysis and it generally features in some form in every lab throughout a course.

### Inevitability

In the first lab, we will measure the length of a pendulum. Without a ruler you might compare it to your own height and (after converting to meters) make an estimate of 1.5m. Of course, this is only approximate. To quantify this, you might say that you are sure it is not less than 1.3m and not more than 1.7m. With a ruler, you measure 1.62m. This is a much better estimate, but there is still uncertainty. You couldn't possibly say that the pendulum isn't 1.62001m long. If you became obsessed with finding the exact length of the pendulum you could buy a fancy device using a laser, but even this will have an error associated with the wavelength of light.

Also, at this point you would come up against another problem. You would find that the string is slightly stretched when the weight is on it and the length even depends on the temperature or moisture in the room. So which length do you use? This is a problem of *definition*. During lab you might find another example. You might ask whether to measure from the bottom, top or middle of the weight. Sometimes one of the choices is preferable for some reason (in this case the middle because it is the center of mass). However, in general it is more important to be clear about what you mean by "the length of the pendulum" and consistent when taking more than one measurement. Note that the different lengths that you measure from the top, bottom or middle of the weight do not contribute to the error. Error refers to the range of values given by measurements of exactly the same quantity.

### Importance

In daily life, we usually deal with errors intuitively. If someone says "I'll meet you at 9:00", there is an understanding of what range of times is OK.

However, if you want to know how long it takes to get to the airport by train you might need to think about the range of possible values. You might say "It'll probably take an hour and a half, but I'll allow two hours." Usually it will take within about 10 minutes of this most probable time. Sometimes it will take a little less than 1hr20, sometimes a little more than 1hr40, but by allowing the most probable time plus three times this uncertainty of 10 minutes you are almost certain to make it. In more technical applications, for example air traffic control, more careful consideration of such uncertainties is essential.

In science, when a new theory overthrows an old one a discussion or debate about relevant errors takes place. %%% Example of Experiment %%% %%% Justifying the Errors %%%

In this course, we will definitely not be able to overthrow established theories. Instead, we will verify them with the best accuracy allowed by our equipment. The first experiment involves measuring the gravitational acceleration g. This may seem pointless since it has clearly been measured with much greater accuracy elsewhere. However, the idea is to make the most accurate possible verification using very simple apparatus which can be a genuinely interesting exercise.

### This Course

There are several techniques that we will use to deal with errors. All of them are well explained, with more formal justifications, in *An Introduction to Error Analysis* by John Taylor. Sections appear during the course as follows:

- Lab 1: How to estimate error when reading scales, on repeated measurements and in calculations. How to write down measurements and draw conclusions from them.
- Lab 2: Errors on graphs and vector diagrams. Best-fit lines.
- Lab 3: Error formulae and how they can save time over plugging in limits. Propagating errors for e = |v_f / v_i|.
- Lab 4 (Projectile Motion): Neglecting small errors and approximating big errors.
- Lab involving multiple measurements of same quantity: Random vs. systematic error.
- Lab involving a sine (possibly not until the second semester): Calculus and how it can save time calculating formulae.

%%%%%%%%% I left a section for the first lab that involves comparison of two measured quantities which are predicted to be equal as opposed to comparison with accepted value.

## First Experiment

The goal of each lab is to demonstrate that your equipment is working as well as you could reasonably expect and that the relevant physical law describes it reasonably well. The role of error analysis is to quantify what "reasonably" means. In many labs during the course, including this first one, this is done by first measuring a physical quantity (all measurements give a range of possible values) and then seeing if the accepted value lies within that range.

Estimating your uncertainties is not always easy. Choosing large uncertainties makes it more likely that the accepted value will lie in the range. However, the smaller the uncertainties the better the experiment. Here are some guidelines:

### How to estimate error when reading scales

After you have measured your pendulum, imagine that another group measured it again with your ruler but without knowing your result. Would you be surprised if they got a value 1mm different to yours? What about 5mm? The largest change that would *not* make you question whether they had made a mistake is a good general guideline for the amount of error you should use.

1. Use a range the same as the scale markings

%%% Diagram of ruler (the one in the current lab manual is pretty good) %%%

When you put the pendulum string up against a ruler you must decide which mark is closest to the end. In the diagram, it is a close call, but we can definitely say that our measurement is between 46.4cm and 46.6cm. These are called the lower and upper limits or, if you are feeling less certain about it, the lowest and highest probable values. Our best estimate is in the middle, 46.5cm. You might decide that no more accurate estimation is possible, so your range of 2mm is the same as the scale markings.

2. Use a range larger than the scale markings

When you are timing the swing of the pendulum the first reading of your stop clock might be 1.43s. However, even before doing the next one you know that it won't be exactly the same. Most of the uncertainty comes from your reaction time and it is much larger than the scale markings (1/100 seconds). In this case, without taking repeated readings you can only really guess what the uncertainty is. In this experiment, we will try to get a feel for it and reduce it if possible.

A vivid example you will encounter later in the course is that of trying to measure the length of a spring that is jiggling. If the end of the spring keeps moving over a range of 5mm then this is the uncertainty. If you can get the oscillations to die down then you can reduce the uncertainty.

3. Use a range less than the scale markings

It doesn't often happen, but sometimes you can do better than simply choose which mark is closest. In the above diagram, you might claim a range of 46.45 to 46.55cm.

### How to estimate error on repeated measurements (2/3 Method)

When you have timed the swing of the pendulum a few times you want to find a best estimate and a probable range for "the time of one pendulum swing". We will use these values (in seconds) as an example:

1.43, 1.52, 1.46, 1.64, 1.53, 1.57

The best estimate is the average or mean value which is 1.53s. The probable range should include about 2/3 of the values. A quick way to do this is to ignore the largest 1/6 and the smallest 1/6 and then find the range of what is left. In the example the range is 1.57-1.46=0.11s. If you have 25 values, ignore 4 large and 4 small.

The reasons for choosing a range that includes about 2/3 of the values come from the underlying statistics of the Normal Distribution. The difference between each measurement and the mean of many measurements is called the "deviation". The 2/3 method gives us a quick approximation of a kind of average deviation known as the "standard deviation". This choice allows us to accurately add and multiply errors, and has the advantage that the range is not affected much by outliers and occasional mistakes.
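
The 2/3 method above is easy to automate. Here is a minimal sketch in Python, using the six times from the example (the function name is our own):

```python
def two_thirds_range(values):
    """Estimate the error range using the 2/3 method: sort the values,
    discard the largest 1/6 and the smallest 1/6, and take the range
    of what is left (a quick approximation of the standard deviation)."""
    vals = sorted(values)
    n_drop = len(vals) // 6                 # e.g. drop 1 of 6, or 4 of 25
    kept = vals[n_drop:len(vals) - n_drop] if n_drop else vals
    return max(kept) - min(kept)

times = [1.43, 1.52, 1.46, 1.64, 1.53, 1.57]   # seconds
best = sum(times) / len(times)                  # the mean: best estimate 1.53s
err_range = two_thirds_range(times)             # 1.57 - 1.46 = 0.11s
```

For the example data the mean is 1.525s (which we round to 1.53s) and the 2/3 range is 0.11s, matching the calculation by hand.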

### How to estimate error in calculations (Plug-in Limits Method)

Most interesting quantities cannot be measured directly. They are the result of a calculation based on one or more direct measurements. A simple example is the area of a rectangle. We must measure the length and width and multiply them. Large length and large width give a large area. Therefore, the "highest probable value" of the area is equal to the highest probable value of the length multiplied by the highest probable value of the width. Similarly, the "lowest probable value" of the area is equal to product of the two lowest probable values.
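
As a concrete sketch of the Plug-in Limits Method for the rectangle (the numbers here are invented for illustration):

```python
# Hypothetical measurements: length 4.5 ± 0.1 cm, width 3.0 ± 0.1 cm.
l_lo, l_hi = 4.4, 4.6            # lower and upper limits of the length
w_lo, w_hi = 2.9, 3.1            # lower and upper limits of the width

area_best = 4.5 * 3.0            # best estimate: 13.5 cm^2
area_lo = l_lo * w_lo            # lowest probable value: both limits low
area_hi = l_hi * w_hi            # highest probable value: both limits high
```

This gives a probable range of about 12.8 to 14.3 cm^2 around the best estimate of 13.5 cm^2.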

To measure the gravitational acceleration using a pendulum we must first measure its length and time for one swing and then use the formula:

g = 4π² l / t²

There is one extra complication when working out the probable range. Since the time for one swing t is on the bottom of the formula, a small value will make g large. Therefore, to find the highest probable value for g, you should plug into the formula the highest value for l and the *lowest* value for t.
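
A short sketch of this calculation, with invented values for l and t roughly matching a 1.6 m pendulum:

```python
import math

def g_from_pendulum(l, t):
    """g = 4 * pi^2 * l / t^2, where t is the time for one full swing."""
    return 4 * math.pi**2 * l / t**2

l_best, l_err = 1.62, 0.01   # metres (invented measurement)
t_best, t_err = 2.55, 0.05   # seconds (invented measurement)

g_best = g_from_pendulum(l_best, t_best)
# Because t is on the bottom, the highest g comes from the highest l
# combined with the *lowest* t, and vice versa.
g_hi = g_from_pendulum(l_best + l_err, t_best - t_err)
g_lo = g_from_pendulum(l_best - l_err, t_best + t_err)
```

With these numbers the best estimate comes out near 9.8 m/s^2, with the probable range given by g_lo and g_hi.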

### How to write the result of a measurement

The correct way to report *any* measurement is to state your best estimate of the quantity and also a range of values that you are confident it lies in. For the example of the length given above, one way to write it is:

Best estimate: 46.5cm
Probable range: 46.4 to 46.6cm

This way is most convenient for the Plug-in Limits Method since the upper and lower limits (46.6cm and 46.4cm respectively) are explicit. However, results of measurements are more commonly written in the more compact form:

l = 46.5 ± 0.1 cm

where the value 0.1cm is the "error". Note that the "error" is half the "range". For the example of times given above we can write:

Best estimate: 1.53s
Probable range: 1.46 to 1.57s

In this case, the limits are not equally spaced from the best estimate so again we use half the range, (1.57-1.46)/2 ≈ 0.06, for the error:

t = 1.53 ± 0.06 s

#### Significant Figures

Since errors are estimated, in this course you should always round errors to one significant figure.

This rule also applies to errors that you calculate. It means that many of the calculations boil down to adding and multiplying single-digit numbers, which hopefully can mostly be done in your head. Using approximate calculations is useful in many walks of life.

Once you have a value for the error, you must consider which figures in the best estimate are significant. Writing the result of a measurement as:

t = 1.532 ± 0.6 s

is ridiculous since it means the value can be as high as 2.1s or as low as 0.9s. The last two digits have no significance at all. It should be written instead as:

t = 1.5 ± 0.6 s

The general rule is:

The last significant figure of the best estimate should be in the same decimal position as the error.
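
This rounding rule can be sketched as a small helper function (the function name is ours, and the simple log10 trick can misbehave right at a rounding boundary such as 0.096 → 0.10, so treat it as a sketch rather than a robust implementation):

```python
import math

def report(value, error):
    """Round the error to one significant figure, then round the
    best estimate to the same decimal position."""
    decimals = -math.floor(math.log10(abs(error)))
    return round(value, decimals), round(error, decimals)
```

For example, report(1.532, 0.6) gives (1.5, 0.6), matching the rule above.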

### Drawing Conclusions

Following these guidelines, you can write your measurement in a truly meaningful way, but it is still not very interesting on its own. In order to draw a conclusion from your experiment, you must compare *two or more measurements*. There are three general ways that we will do this in this course:

1. Comparing a measured value with an accepted value
2. Comparing two measured values predicted to be equal
3. Verifying a relationship with a graph

We will discuss the first way for this experiment and the other two in later sections. Put briefly, your experiment is a success if the accepted value lies within the range given by your measurement.

#### Comparing a measured value with an accepted value

If the result of your measurement is written the first way, with a probable range, you can immediately see if the accepted value is between the upper and lower limits. If it is written the second way, with an error, then you can calculate the difference between your best estimate and the accepted value. This is known as the "discrepancy" and you should compare it to your calculated error. There are three possible outcomes:

1. If the discrepancy is smaller than the error then clearly the accepted value is within your measured range and you can claim that your experiment is a success. There are various technical terms to describe this situation. You can say that the two measurements are "consistent". Alternatively, you can say that the two values are the same "within error" or that the discrepancy between them is "insignificant".

2. If it is only just outside the range (let's say, if the discrepancy is less than twice the error), then you can still regard your experiment as satisfactory. The ranges that we use are a little blurry, which is related to the fact that they are chosen to include only about 2/3 of the values. How much longer or shorter would your times have to be to get rid of the discrepancy? Try to remember exactly how you released the pendulum and stopped the clock. Can you explain the discrepancy this way? Write something about this and then your report is complete.

3. If your accepted value is well outside the range this indicates some kind of problem with your experiment or your calculations. Unfortunately these are often difficult to spot. You should talk to your TA and mention any areas you think might have gone wrong. A sensible discussion of the possible causes in your report can fully make up for a bad result.
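
The three outcomes can be summarised in a few lines of code (the category labels are ours):

```python
def compare_to_accepted(best, error, accepted):
    """Classify the result of comparing a measurement with an accepted value."""
    discrepancy = abs(best - accepted)
    if discrepancy < error:
        return "consistent"      # within error: a success
    elif discrepancy < 2 * error:
        return "satisfactory"    # just outside: discuss possible causes
    else:
        return "problem"         # well outside: look for mistakes
```

For instance, a measurement of g = 9.7 ± 0.2 m/s^2 is consistent with the accepted 9.81 m/s^2, while 9.5 ± 0.1 m/s^2 indicates a problem.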

It might be tempting to ignore errors and say that the two values are "about the same", but this is really just a statement of your intuition about uncertainties. That would be a shame, when a little more careful consideration and a few lines of calculation can yield an unbiased and objective conclusion.

## Second Experiment

Errors on graphs and vector diagrams. Best-fit lines.

### Vector Diagrams

### Graphs

In the first experiment, we measured the time of swing for *one* length of the string. If we had changed the length of the string, the time of swing would have changed. There is a "relationship" between the two. This week we will use a more powerful method of verifying a different physical law. Remember... there are three:

1. Comparing a measured value with an accepted value
2. Comparing two measured values predicted to be equal
3. Verifying a relationship with a graph

We will verify the relationship F = k x.

### Reasons for plotting graphs, straight lines

Measured points, however carefully made, will not *exactly* fit on a straight line. We want to find out if this is just because of experimental uncertainties (in which case we have successfully verified the relationship), because we made a mistake, or because F is *not* proportional to x.

#### Possible Relationships

Here, y means "the quantity on the vertical axis", in this case the force F, and x means "the quantity on the horizontal axis", in this case the extension x.

1. Linear: y = m x + b

In the special case that b = 0, we give the relationship a different name:

2. Proportional: y = m x

Note that this means that if we double F, then x will double.

For both cases there are non-graphical methods to check how well the measurements verify the relationship. However, graphs show it more easily and more clearly.

There are two special cases:

3. Equal: y = x
4. Constant: y = b

Generally we use non-graphical methods for these. For example, in case 4 we have already learned the 2/3 method for quantifying how close to constant a large number of measurements are.

#### Best-fit lines

The physical law F = kx describes the relationship between F and x, but since we still don't know k there is a family of lines that we could draw. These lines give the "expected" value of the extension for each value of the force.

%%% diagram of proportionality lines %%%

Any of these lines that goes through or close to all the points is OK. (How close? As mentioned last week, if the expected value is within the error bar that's great, and if it is within twice the error then that's still OK. Beyond that, however, suggests some kind of problem.)

If there are any acceptable lines, there will be a range of lines that are possible.
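
One simple non-graphical way to pick a single best proportional line y = m x is a least-squares slope through the origin, m = Σxy / Σx² (this anticipates the fitting methods used later; the spring data below are invented for illustration):

```python
def proportional_fit(xs, ys):
    """Least-squares slope of y = m*x through the origin: m = sum(x*y)/sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Invented spring data: extensions x (cm) and forces F (N), roughly F = 2x.
x = [1.0, 2.0, 3.0, 4.0]
F = [2.1, 3.9, 6.1, 8.0]
k = proportional_fit(x, F)   # best-fit spring constant, close to 2 N/cm
```

The slope found this way lies within the range of acceptable lines whenever the fit is good.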

## Third Experiment

### Error formulae

Error formulae and how they can save time over plugging in limits.

### Propagating errors

Propagating errors for a simple formula such as e = |v_f / v_i|.

## Fourth Experiment

Neglecting small errors and approximating big errors.

## Lab involving multiple measurements of same quantity

Random vs. Systematic error.

## Lab involving a sine in the formula

Calculus and how it can save time calculating formulae.