As someone trained in a real science, one of the things that bugs me about the so-called science of economics is that econognomes never cite figures with a margin of error. Every measurement has a level of accuracy, a level of precision, and a margin of error. If we measure a distance with a ruler, for instance, the margin of error is half the smallest unit marked on it.
Recently two large organisations, the IMF and Ernst & Young, published growth forecasts for the UK economy. Their forecasts for 2013 are 1.4% and 1.6% respectively. Of course the two organisations use different models (though probably both are Neo-classical), so we expect slight differences. But the closeness of the two figures should not make us any more confident, because neither organisation tells us how uncertain its figure is.
Accuracy in this case will have to be determined in retrospect: if the cited figure is close to the actual figure when it is announced, we would say the forecast was more or less accurate. It would be interesting to see how accurate these kinds of predictions have been over time. This article on the IMF website by Paula Masi suggests that the models are OK under stable conditions but don't predict changes very well, which is about the best we could expect from Neo-classical models. Note that in this article IMF forecasts deviated from the real world by ±1% on average and had to be revised frequently to take account of changing economic policy. This figure of ±1% seemed to apply quite broadly to other forecasters as well.
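The retrospective check described above is straightforward to sketch. The figures below are made up for illustration (the real series would come from IMF forecasts and ONS outturns); the point is just that averaging the absolute forecast errors over past years gives a working margin of error:

```python
# Retrospective accuracy of growth forecasts -- all numbers hypothetical.
forecasts = {2009: -2.8, 2010: 1.3, 2011: 2.0, 2012: 0.8}   # % growth, predicted
actuals   = {2009: -4.2, 2010: 1.9, 2011: 1.0, 2012: -0.2}  # % growth, outturn

# Mean absolute error: the average size of the miss, in percentage points.
errors = [forecasts[year] - actuals[year] for year in forecasts]
mean_abs_error = sum(abs(e) for e in errors) / len(errors)
print(f"average forecast error: ±{mean_abs_error:.1f} percentage points")
```

With these invented figures the average miss comes out at about one percentage point, which is the kind of number the Masi article reports for real forecasts.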
The precision of the figure is one decimal place in this case, meaning the figure is supposed to be precise to 0.1%. However, in a situation where the likely inaccuracy is ±1%, the extra decimal place is meaningless: a forecast of 1.4% tells us no more than a forecast of "about 1%".
The margin of error is the error inherent in the measurement. For instance, if the measure is 1.4% ± 1% then it is an extremely unreliable figure, because the true value could be anything from 0.4% to 2.4%. If the margin of error is ±0.1% then the measure is expected to lie between 1.3% and 1.5%. Note that even at this level of error the two predictions quoted above overlap, so we shouldn't treat the difference between them as very significant: they could both be 1.5%, for instance. By comparing the accuracy of forecasts over time we can say that the average error in the prediction is ±1%, and take this as the real margin of error.
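The overlap argument above can be made concrete. This is a minimal sketch (the helper names are my own, not from any statistics library) showing that the two forecasts' error ranges intersect even under the optimistic ±0.1% assumption:

```python
def interval(estimate, margin):
    """The (low, high) range implied by estimate ± margin."""
    return (estimate - margin, estimate + margin)

def overlaps(a, b):
    """True if two (low, high) intervals share at least one value."""
    return a[0] <= b[1] and b[0] <= a[1]

imf, ey = 1.4, 1.6  # the two 2013 forecasts, in percent

# With the historical ±1% error the intervals are roughly 0.4-2.4 and 0.6-2.6.
wide = overlaps(interval(imf, 1.0), interval(ey, 1.0))

# Even at a tight ±0.1% the intervals (1.3-1.5 and 1.5-1.7) still touch at 1.5,
# so the 0.2-point gap between the headline figures is not significant.
tight = overlaps(interval(imf, 0.1), interval(ey, 0.1))

print(wide, tight)  # both True
```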
In statistical measures the error is a function of the sample size, so it's easy to work out: if 5% of a sample say they'll vote Green, the sample size tells us the expected error on that 5%. Similarly, when comparing two sets of figures, statisticians cite the likelihood of a correlation between them. When CERN announced that they had 'found' a new particle, what they actually said was that they had a confidence level of 99.9997% that the CMS detector had found a new boson at 125.3 ± 0.6 GeV/c², a significance of 4.9 σ, or a bit over 99.9999% confidence that it wasn't a fluke. Note that it could still be a fluke!
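For the polling case, the standard 95% margin of error for a proportion follows directly from the sample size. A sketch, assuming a simple random sample of a hypothetical 1,000 voters:

```python
import math

def poll_margin(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# 5% of a sample of 1,000 say they'll vote Green:
moe = poll_margin(0.05, 1000)
print(f"5% ± {moe * 100:.1f} percentage points")  # roughly ±1.4 points
```

This is exactly the kind of figure pollsters can (and often do) attach to their headline numbers, and which economic forecasters don't.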
Without this information a figure may as well have been plucked out of the air, and with economic forecasting we suspect that this is exactly what happens! I've complained to the media about the way they cite such figures, and even had journalists agree with me. Citing figures without their uncertainty creates a false sense of certainty, but these economic measures are far from certain. At present the margin of error is of the same order of magnitude as the measurement itself, so forecasters could be almost 100% wrong in either direction!
I think it's also pretty clear from all of this that economics' claim to be a science is far from credible. Economic models do make predictions, but they are almost always wrong, in ways that ought to invalidate the theory.