Q:

I want you to give me some good source that says how to calculate errors or uncertainty in experiments
I have searched over the internet, but I found many results that have confused me
In college they taught us that errors or uncertainty in case of direct measurement for many times is calculated like this:
Let L be a length measured with a ruler. We measured it 6 times and found these results:
L1 = 5.12, L2 = 5.15, L3 = 5.16, L4 = 5.19, L5 = 5.11, L6 = 5.18
Now we calculate the average L = (30.91/6) = 5.151667
Then we do the uncertainty of every measurement which is (the read value"Li" - the average "L" )
ΔLi = |Li - L(average)|
ΔL1 = 0.031667, ΔL2 = 0.001667, ΔL3 = 0.008333, ΔL4 = 0.038333, ΔL5 = 0.041667, ΔL6 = 0.028333
The average ΔL = (0.15/6) = 0.025
And since we quote the uncertainty to only one significant figure, it becomes ΔL = 0.03
And since the value of L should be quoted to the same decimal place as its uncertainty (two digits after the decimal point in our example), the average becomes L = 5.15
So now the final result is: L = av(L) ± av(ΔL) = 5.15 ± 0.03
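For what it's worth, the arithmetic described above can be sketched in a few lines of Python (the numbers are just the ones from this example):

```python
# Sketch of the averaging procedure above, using the example's numbers.
measurements = [5.12, 5.15, 5.16, 5.19, 5.11, 5.18]

mean = sum(measurements) / len(measurements)        # 30.91 / 6 = 5.151667
deviations = [abs(x - mean) for x in measurements]  # ΔLi = |Li - L(average)|
mean_dev = sum(deviations) / len(deviations)        # 0.15 / 6 = 0.025

print(f"mean = {mean:.4f}, mean deviation = {mean_dev:.4f}")
```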
But they told us in the same lesson that the uncertainty in case of direct measurement is half of the smallest increment of the instrument scale
Then we had a homework problem asking us to calculate the uncertainty for a direct measurement repeated many times (we used an ohmmeter to measure the same resistance in a circuit 6 times, with a different voltage each time). I used the first method, but the teacher said it was wrong and that the uncertainty is the smallest increment of the instrument scale. When I asked "why is that wrong?", he couldn't answer and just said "forget about it and use this way".

- Miriam (age 19)

Damascus, Syria

A:

That's a very nice question. Everything you say through your "final result" makes sense. People usually don't use the average absolute value of the error but rather the square root of the average of the squared error, called the standard deviation, but for many purposes your choice is just fine.
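Here is a minimal sketch of that comparison in Python, using the measurements from your question (the variable names are just for illustration):

```python
import math

# Compare the mean absolute deviation used in the question with the
# standard deviation; the data are the six readings from the question.
measurements = [5.12, 5.15, 5.16, 5.19, 5.11, 5.18]
n = len(measurements)
mean = sum(measurements) / n

# Mean absolute deviation (the question's choice):
mean_abs_dev = sum(abs(x - mean) for x in measurements) / n

# Sample standard deviation (square root of the average squared deviation,
# with the conventional n - 1 in the denominator):
std_dev = math.sqrt(sum((x - mean) ** 2 for x in measurements) / (n - 1))

# The uncertainty of the average itself shrinks as more readings are taken:
std_of_mean = std_dev / math.sqrt(n)

print(f"{mean_abs_dev:.4f}  {std_dev:.4f}  {std_of_mean:.4f}")
```

With these six values both measures round to 0.03 at one significant figure, which is why your simpler choice works fine here.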

The use of half the smallest increment is a decent approximation in certain cases, if some conditions hold:

1. You can't read the scale to better than the smallest marked increment. E.g. maybe it's a digital readout. Otherwise the error could be *smaller* than half the increment.

2. There are no other sources of noise. Otherwise the error could be much *larger* than half the increment.

The second condition usually *doesn't* apply. In your case, for example, the fluctuations between measurements were obviously larger than that.
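One common convention (assumed here, not something stated above) is to combine independent uncertainties in quadrature; with numbers like those in your question, the random scatter clearly dominates the scale-reading limit:

```python
import math

# Sketch: combining a scale-reading limit with random scatter in quadrature.
# The 0.005 reading error assumes a ruler marked in 0.01 increments
# (half the smallest increment); the 0.03 scatter is from the question.
statistical = 0.03   # scatter between repeated measurements
reading = 0.005      # half the smallest scale increment (assumed)

total = math.sqrt(statistical**2 + reading**2)
print(f"total uncertainty ≈ {total:.4f}")  # the scatter dominates
```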

So far as I can tell, you were right. The teacher was just telling you to follow orders and not think about what anything meant. We see that sort of teaching all too often at all levels.

Mike W.

*(published on 03/22/2014)*