In the ongoing debates about anthropogenic global warming, one recurring issue is that the scientists who analyzed the proxy temperature data (tree rings, dissolved gases in glacier ice, etc.) did so using software to “correct” the data – but they have never disclosed the software itself, so nobody can verify or replicate their results. Furthermore, different datasets cannot be independently subjected to the same “corrections”.
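To make the point concrete, here is a minimal, purely hypothetical sketch of the kind of “correction” step at issue: calibrating a proxy series against an instrumental record over an overlap period, plus an arbitrary smoothing choice. Every name and number below is invented for illustration – this is not anyone's actual reconstruction code, which is precisely the complaint. Without seeing choices like the window length here, nobody else can reproduce the published output exactly.

```python
# Hypothetical illustration only: a toy proxy "correction" pipeline.
# None of this is the actual code used by any climate group.

from statistics import mean

def calibrate(proxy, instrumental):
    """Least-squares fit of proxy units onto temperature over an overlap
    period, returning (slope, intercept). Assumes equal-length lists."""
    px, iy = mean(proxy), mean(instrumental)
    cov = sum((p - px) * (t - iy) for p, t in zip(proxy, instrumental))
    var = sum((p - px) ** 2 for p in proxy)
    slope = cov / var
    return slope, iy - slope * px

def smooth(series, window=5):
    """Centered moving average. The window length is exactly the kind of
    undisclosed knob that makes results hard to replicate."""
    half = window // 2
    return [mean(series[max(0, i - half):i + half + 1])
            for i in range(len(series))]

# Toy data: ring widths (mm) and overlapping instrumental temps (deg C).
rings = [1.10, 1.25, 1.05, 1.40, 1.30, 1.50, 1.45, 1.60]
temps = [13.8, 14.0, 13.7, 14.3, 14.1, 14.5]   # overlaps the last 6 years

slope, intercept = calibrate(rings[-6:], temps)
reconstruction = smooth([slope * r + intercept for r in rings])
print(reconstruction)
```

Change the smoothing window, the overlap period, or the fitting method and you get a different curve from the same raw data – which is why the exact code matters, not just a prose description of the method.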
As yet, there is no norm or convention in most areas of science that requires scientists to release this code. In the case of climatology, when some code has been disclosed (generally involuntarily), one can infer a reason why the scientists held the code close to their chests: it's embarrassingly bad, and full of very evident errors.
This situation needs to change. Scientific norms and conventions need to catch up with the computer age. Scientific American has weighed in with a similar opinion. I canceled my subscription a few years ago when I got fed up with their fawning AGW coverage. I'm amused (but not surprised) that in this article they cover the problems caused by undisclosed computer software – but manage to miss mentioning one of the most obvious problem areas: climatology...
In the comments following the article I think one person hits it:
"Knowing the algorithm and having access to the data should be sufficient for replication, having the original code is not necessary.
In fact, being able to reproduce the algorithm yourself, i.e. writing your own code, is a better test than merely inspecting the original code."
The problem with what has come out from the climate people is not so much that the code is crap (it is, but that isn't the point); it is that, as you've posted a few times, the results are manipulated and there are a lot of hidden factors involved. There is no real algorithm to attempt to reproduce.