OK, now say you do your test at 30 degC (underscores below just separate groups of digits). Assume for a minute that your mic is made of the same steel as the gage blocks.
Your 1.000_000 gage block now actually is about 1.000_120 (+/- about 0.000_010 depending on alloy). But your micrometer, being made of steel, has also stretched, so it reads 1.000_0 when the block is really 1.000_120. So your block measures 1.000_0. Which is sometimes a blessing and sometimes a curse. If you wanted to know what the steel's nominal dimensions were at 20 degC, you got the right answer. You ship the part to the customer, who measures at 10 degC, and the same thing happens in reverse. If you needed to know what size it is right now, you got the wrong answer.

If you measure a 1.000_000 piece of Zerodur, your micrometer reads 0.999_9, which is the correct dimension neither now nor at 20 degC. If you measured a piece of aluminum, you also wouldn't get the correct answer now or at 20 degC. Stainless expands at a higher rate than other steels, so a stainless part measured with a regular steel mic is off; and if the mic itself is stainless, then regular steel parts read off instead. Body temperature on a piece of steel can throw it off by 2 tenths if you hold onto it long enough, and twice that for aluminum. Which is why mics frequently have two little pieces of plastic insulation where you are supposed to hold them. And all of these numbers get bigger on a long piece.
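You can sketch all of those cases with a few lines of Python. This is just an illustrative model, using approximate handbook CTE values I've picked (not numbers from anyone's spec sheet), with both part and mic assumed to be at a uniform temperature and the mic's scale calibrated at 20 degC:

```python
# Apparent micrometer reading vs. true length at temperature.
# CTE values are approximate handbook figures, in ppm per degC.
ALPHA = {
    "steel":     11.5,
    "stainless": 17.0,
    "aluminum":  23.0,
    "zerodur":   0.02,
}

def length_at(nominal_20c, alpha_ppm, temp_c):
    """Actual length at temp_c of a part that is nominal_20c long at 20 degC."""
    return nominal_20c * (1.0 + alpha_ppm * 1e-6 * (temp_c - 20.0))

def mic_reading(true_length, mic_alpha_ppm, temp_c):
    """What a micrometer whose scale is right at 20 degC indicates at temp_c.
    Its own frame has expanded, so everything it measures reads that much smaller."""
    return true_length / (1.0 + mic_alpha_ppm * 1e-6 * (temp_c - 20.0))

T = 30.0
steel_mic = ALPHA["steel"]
for material in ("steel", "stainless", "aluminum", "zerodur"):
    true_now = length_at(1.000_000, ALPHA[material], T)
    reads = mic_reading(true_now, steel_mic, T)
    print(f"{material:9s} true length now {true_now:.6f}  mic reads {reads:.6f}")
```

Run it and you see the post's scenario: the steel part reads 1.000_000 (errors cancel), while Zerodur, which barely moved, reads about a tenth low.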
So, it is easy to fool yourself into thinking your measurements are more accurate than they really are, for this reason in addition to those stated in other posts. Like confirmation bias: you take three measurements and get 0.999_8, 0.999_9, and 1.000_0. Well, it must be 1.000_0 and I wasn't holding it right the first two times. Even when you aren't trying to cheat, it is still easy to cheat. Particularly when what you are measuring looks spiffy. I like the bogus gage block trick.
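For what it's worth, the honest way to report those three readings is the mean and the spread, not the most flattering one. A trivial sketch:

```python
import statistics

# Three repeated readings of the same part.
readings = [0.999_8, 0.999_9, 1.000_0]

mean = statistics.mean(readings)
spread = max(readings) - min(readings)
print(f"mean {mean:.4f}, spread {spread:.4f}")
# The defensible answer is roughly 0.9999 +/- 0.0001, not "must be 1.0000".
```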
All metrology 101.
How does hardness (heat treating) of an alloy affect the coefficient of thermal expansion? I got the impression from a badly written material datasheet that it could have a pretty substantial effect.
And since it is often hard to measure under controlled temperature conditions, this matters - one may need to calculate the errors or a correction. Particularly since many of the objects we measure, and the measuring instruments themselves, are often hardened.
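Such a correction is simple arithmetic if you trust your CTE numbers and temperatures. A sketch, under the same simplifying assumptions as before (uniform temperature, mic scale true at 20 degC, illustrative CTE values):

```python
def corrected_to_20c(reading, temp_c, part_alpha_ppm, mic_alpha_ppm):
    """Convert a micrometer reading taken at temp_c into the part's size
    at 20 degC. CTEs are in ppm per degC. Illustrative model only."""
    dt = temp_c - 20.0
    true_now = reading * (1.0 + mic_alpha_ppm * 1e-6 * dt)  # undo the mic's own expansion
    return true_now / (1.0 + part_alpha_ppm * 1e-6 * dt)    # shrink the part back to 20 degC

# An aluminum part (CTE ~23 ppm/degC) measured with a steel mic (~11.5) at 30 degC:
print(corrected_to_20c(1.000_115, 30.0, 23.0, 11.5))
```

Of course, the correction is only as good as your knowledge of the actual CTEs and the actual temperatures of part and instrument, which is the whole problem.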
Now, when I was doing work at the 0.5 nm (~0.02 microinch) resolution level, we needed to let a thick piece of glass stabilize for at least 24 hours or we could see the effect of it not being in thermal equilibrium (actually, we could still see it, it just wasn't as ginormous). I had to build a temperature stabilizer for that one (which is rough when putting a glass cover over it messes things up optically), plus we calibrated a lot. It operated in an environment (telescope dome) where temperature varied a lot. Plus we had cryogens. Another fun one was measuring wind speeds on another planet, when both planets are rotating and hurtling through space. Even there, people tended to overlook the impact of temperature.
On one occasion, we had a piece of equipment failing intermittently. I was the only one who noticed that the failures were correlated with temperature. Took some convincing. Took even more convincing to get permission to cook the million dollar instrument (not ours) with a heat gun to keep it working. Did the job. Turned out a wire had broken (one half of a differential pair) and the heat was shifting the logic thresholds just enough for it to work. Experiences like this, though, mean I like to design temperature sensors into things and record temperature.