
Statistics and metrology standards

jCandlish
{Moderator note... This entire off-topic exchange was separated from another thread. That's why the quote below makes no sense.}

To be assured of the accuracy of the measurement of any dimension, the instrument should have a rated accuracy ten times better than the tolerance being measured.
When you get into very tight tolerances, this becomes impractical. The technology simply does not exist at any reasonable cost.

Please understand that resolution and accuracy are not the same thing. A device with .00005" resolution will typically have a rated accuracy twice that or larger.

- Leigh

And that is why Tolerance Classes scale by factors of 10! :crazy:

The factor-of-10 thing is just wrong. Nyquist says >2, and he gives a rigorous proof.

And that is why DIN ISO 3650 scales gauge block tolerance classes by factors of 2.

Regards
<jbc>
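
For what it's worth, the resolution/accuracy distinction in the quote above is easy to picture with a few lines of throwaway Python (the numbers are just the ones quoted; nothing here comes from any standard):

Code:
# Resolution is the smallest step the instrument can display;
# accuracy is the bound on how far a reading may sit from the true value.
resolution = 0.00005   # inches, smallest displayable step (from the quote above)
accuracy   = 0.0001    # inches, rated accuracy, about twice the resolution

def displayed_reading(true_value, instrument_error=0.0):
    """Round the (possibly erroneous) measurement to the instrument's resolution."""
    measured = true_value + instrument_error   # instrument_error stays within +/- accuracy if in spec
    return round(measured / resolution) * resolution

# The display moves in 0.00005" steps, yet this reading is off by nearly 0.0001".
print(displayed_reading(0.250130, instrument_error=0.00008))   # prints roughly 0.2502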
 
Really?

Then have a look at DIN EN ISO 3650, Tolerance Classes for Gauge Blocks, notice that the tolerances approximately double from one class to the next, and note the reason why. Then study Nyquist.

[Attachment: toleranz.jpeg]
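
To see how much gentler a doubling progression is than a factor-of-10 progression, here is a throwaway comparison (the starting tolerance is invented; the real class limits are in the attached table):

Code:
# Illustrative only: class tolerances that double per step versus ones that
# grow by a factor of 10 per step. The 0.1 um starting value is made up.
base_um = 0.1
for n in range(4):
    print(f"class {n}:  x2 scaling = {base_um * 2**n:.1f} um    x10 scaling = {base_um * 10**n:.1f} um")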
 
The calibration factor has absolutely nothing to do with tolerance classes.

It's the accuracy of the measuring instrument compared with the accuracy of the parameter being measured.

For example, if you want to certify a micrometer to .001" accuracy, your standard must be .0001" or tighter.

- Leigh
 
I should note that my background is as a Mathematician and not a Meterologist. The fudge factor is 5X.
Serious question. How many measurement samples does the certification agency get to take (must take), to determine that they have made valid measurements?
It's metrologist, not meterologist.

The number of samples depends on the desired level of accuracy. For the 10x factor you only need one, but typically take three.

As tolerances decrease and higher accuracy is required, more samples are taken.

With precision measuring equipment in an appropriate controlled environment, the deviation of a set of readings is quite small.
For really accurate measurements, the device being calibrated is left in the calibration room to achieve thermal equilibrium, for periods up to seven days.

You can find a lot of information, including calibration procedures, in the calibration section of the NIST site http://www.nist.gov/calibrations/index.cfm

- Leigh
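
As a picture of what gets computed from those repeated readings, a rough Python sketch (the three readings are invented, not from any NIST procedure):

Code:
# Spread of a small set of repeated readings of the same standard.
from math import sqrt
from statistics import mean, stdev

readings = [1.000002, 0.999999, 1.000001]   # inches; invented readings of a 1" gage block

avg = mean(readings)
s = stdev(readings)                # sample standard deviation of the individual readings
sem = s / sqrt(len(readings))      # standard error of the mean; falls as more samples are taken

print(f"mean = {avg:.7f} in   std dev = {s:.7f} in   std error of mean = {sem:.7f} in")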
 
It's metrologist, not meterologist.
I come here also to practice my English


I don't suppose that you could determine the accuracy of your measurement simply by taking enough samples, without comparing it to a calibrated standard? ;)
 
I come here also to practice my English
Don't feel bad. Most Americans have never heard of metrology, which is the study of measurement and calibration.
They think the word is meteorology, which is the study of weather. :D

I don't suppose that you could determine the accuracy of your measurement simply by taking enough samples, without comparing it to a calibrated standard? ;)
The samples I mentioned are measurements of the standards themselves.

For example, a common standard for calibrating micrometers is a gage block. Given that gage blocks are much more accurate than micrometers,
you can certify that a micrometer is accurate to ±0.001" by taking a single measurement of a 1" gage block that's accurate to ±4 millionths of an inch (0.000004).

There is uncertainty in any measurement, but if that uncertainty is much smaller than the tolerance band being certified, it's not significant.

- Leigh
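
Putting Leigh's numbers side by side, just back-of-the-envelope Python (no new data, only the figures quoted above):

Code:
# Ratio of the tolerance being certified to the gage block's own uncertainty.
mic_tolerance = 0.001       # inches, accuracy being certified on the micrometer
block_uncert  = 0.000004    # inches, accuracy of the 1" gage block

print(f"standard is {mic_tolerance / block_uncert:.0f}x tighter than the tolerance")               # 250x
print(f"worst-case shift from the block itself: {block_uncert / mic_tolerance:.1%} of the band")   # 0.4%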
 
Sorry, but I'm not interested in publications or Wikipedia.

My comments are based on standard procedures as used by our national standards body (NIST) and the calibration industry.

In the US, calibration of a measuring instrument is said to be "Traceable to NIST" when the practices and standards used conform to those procedures.

Any other source, standard, or practice is irrelevant.

- Leigh
 

And yet you use the word 'uncertainty' indiscriminately as if you understand what it means.

Practically, if you want to measure a thing, the methodology (not just the instrumentation) should resolve >2x the thing to be measured. That's a law of Nature, not a law of Man.
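
For anyone wondering what the ">2x" is borrowed from, a quick sketch of the sampling-theorem point (the frequencies are arbitrary; this is an analogy, not a calibration procedure):

Code:
# A 10 Hz sine sampled above twice its frequency versus below it.
# Below 2x, the samples are indistinguishable from a slower (aliased) signal.
from math import sin, pi

signal_hz = 10.0

def samples(rate_hz, n=8):
    return [round(sin(2 * pi * signal_hz * k / rate_hz), 3) for k in range(n)]

print("sampled at 25 Hz (> 2x):", samples(25.0))
print("sampled at 12 Hz (< 2x):", samples(12.0))   # looks like a 2 Hz sine, not a 10 Hz one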
 
And yet you use the word 'uncertainty' indiscriminately as if you understand what it means.
The word 'uncertainty' is defined within the SOP of all standards organizations in the world.

Look it up at the NIST site that I referenced earlier.

You're making critical comments about a discipline with which you have no familiarity. That seems unjustified.

- Leigh
 
You're making critical comments about a discipline with which you have no familiarity. That seems unjustified.

I have long years of experience in Statistics, and so feel that my 'opinion' has a little weight. These things referred to as 'Standards' are actually 'Random Samples'. Believe me, I do have some familiarity here.

As far as what is 'legislated by NIST' goes, I am ignorant.

I do not want to ruffle any feathers.

I just want to make the obvious, helpful and truthful claim that:

Practically, if you want to measure a thing, the methodology (not just the instrumentation) should resolve >2x the thing to be measured. That's a law of Nature, not a law of Man.

If NIST is saying 10X, then that is most likely to take 'human factors' into account.
 
This is a pointless discussion.

The procedures and methodology for standardization and the transfer of measurements through a chain of standards
have been well established internationally, going back at least to the middle of the 19th century.

I'm sure the statistical processes underlying those standards are quite accurate.

As for the term 'uncertainties', please note its use in the second sentence of the NIST Policy on Traceability, here: http://www.nist.gov/traceability/index.cfm
Quoting therefrom:
"NIST is also responsible for assessing the measurement uncertainties associated with the values assigned to these measurement standards."
(emphasis mine)

Uncertainties are identifiable error sources, such as temperature, humidity, etc., which make it impossible to know the exact value of a measured parameter.


- Leigh
 
I just realized that the comment previously posted here makes no sense since I split the threads, so I removed it.

- Leigh
 
While the idea of needing a standard that's 10X more accurate than the instrument being calibrated is appealing (perhaps because the base of our number system is 10), it would appear that 4X is all that's required under most (some?) conditions. This 10X and 4X is what's commonly referred to as the Test Uncertainty Ratio (TUR).

The TUR concept is discussed at length in "Calibration: Philosophy In Practice" by the John Fluke Corp., and a little more understandably here: http://www.transcat.com/PDF/TUR.pdf and here: http://metrologyforum.tm.agilent.com/uncert.shtml . Needing a TUR of 10:1 (10X) is almost intuitive. Only needing less - not so much! :crazy:

Best Regards
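
The arithmetic behind those links boils down to something like this (a sketch only; definitions of TUR differ slightly between sources, and the numbers are placeholders):

Code:
# Test Uncertainty Ratio: tolerance being verified divided by the uncertainty
# of the process doing the verifying, checked against 10:1 and 4:1 thresholds.
def tur(tolerance, process_uncertainty):
    return tolerance / process_uncertainty

tolerance = 0.001                      # inches, placeholder value
for u in (0.0001, 0.00025, 0.0005):
    ratio = tur(tolerance, u)
    verdict = "meets 10:1" if ratio >= 10 else "meets 4:1" if ratio >= 4 else "below 4:1"
    print(f"uncertainty {u} in  ->  TUR {ratio:.0f}:1  ({verdict})")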
 
The 10x factor is used because when you can demonstrate that ratio, all of the potential error sources become insignificant,
and you can certify a given measurement to be within the stated tolerance without evaluating all the individual uncertainties.

This factor is used throughout the calibration industry, not only for mechanical measurements, but for calibration of all types
of equipment used to measure physical parameters.

As I said earlier, for instruments which have a very high accuracy rating (a very narrow tolerance band), where a 10x factor cannot
be achieved economically or practically, other more involved methods are required.

- Leigh
 
For example, if you want to certify a micrometer to .001" accuracy, your standard must be .0001" or tighter.
I've been tempted to enter this discussion because what you wrote here is wrong. However, you have subsequently backed off somewhat from it:
The 10x factor is used because when you can demonstrate that ratio, all of the potential error sources become insignificant,
and you can certify a given measurement to be within the stated tolerance without evaluating all the individual uncertainties.
Even this version, although implying that the 10x factor is a matter of convenience, not requirement, is not correct. For an instrument calibration to be traceable to NIST all the uncertainties in all the comparisons that lead back to a particular NIST standard (e.g. a gauge block sitting in an environmental chamber at NIST headquarters) must be evaluated and given. Nowhere does NIST require the standard used be 10x better than the certification of the instrument. However, the more "overspecified" the standard gauge block, the sloppier can be the practice in using it, so I can understand why industry may use 10x for convenience.

But, again, a 0.0001" standard is neither required for certification of a 0.001" micrometer, nor is it sufficient in order to certify the calibration as traceable to NIST. A, say, 0.0005" standard gauge block whose own calibration is traceable to NIST through a documented chain of uncertainties that have all been evaluated, plus a calibration procedure for the micrometer that has itself had its uncertainties evaluated (e.g. although equilibrated for 2 days at 68 °F there is still remaining uncertainty in the thermal expansion which, although it might be small, must be documented), would be perfectly sufficient for certifying a 0.001" micrometer.
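
To put a rough number on that thermal-expansion term (the expansion coefficient is a typical handbook figure for gauge-block steel, assumed here, not taken from the post):

Code:
# Length change of a 1" steel gage block for a 1 degF departure from the 68 F reference.
alpha_per_degC = 11.5e-6            # /degC, typical for gauge-block steel (assumed)
length_in = 1.0                     # inches, nominal length
delta_degC = 1.0 * 5.0 / 9.0        # 1 degF expressed in degC

delta_length_in = alpha_per_degC * length_in * delta_degC
print(f"expansion for a 1 degF offset: {delta_length_in:.7f} in")   # roughly 6 millionths
# Tiny against a 0.001" band, but it still belongs in the documented uncertainty budget.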
 
I'm sorry, but the idea of needing a standard that is 10x more accurate than the tested device is complete and utter bollocks. Given that most devices are calibrated by a secondary lab, not a primary one, you'd be looking at only being able to certify devices to 100x (i.e. 1%) of the primary lab's uncertainty.

Since in my youth I worked with devices that are comparable to UK NPL standards in their uncertainty, and calibrated them at the NPL, I don't buy needing to derate my calculated measurement uncertainties by a factor of 10 without a reason. The uncertainty of a device is based on the calculated uncertainties of the calibration and the calculated and measured variability of the device, not an arbitrary factor of 10.

Regards
Mark

Oops. Opscimc types faster than me.
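
What "calculated uncertainties of the calibration and the variability of the device" usually amounts to in practice is a root-sum-of-squares budget along these lines (the component values are invented for illustration):

Code:
# Combine independent uncertainty components in quadrature and expand with k = 2,
# rather than derating everything by a blanket factor of 10. Values are invented.
from math import sqrt

components_in = {
    "reference standard": 0.000050,   # inches
    "repeatability":      0.000120,
    "thermal effects":    0.000006,
}

combined = sqrt(sum(u**2 for u in components_in.values()))
expanded = 2 * combined               # coverage factor k = 2, roughly 95 % confidence

print(f"combined standard uncertainty = {combined:.6f} in")
print(f"expanded uncertainty (k = 2)  = {expanded:.6f} in")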
 








 