Friday, 11 September 2020

'Scientific Findings' and Reliability

It is simplistic, but I suppose we could use a traffic light scheme for classifying how much faith one should place in particular scientific claims.

RED The least reliable rating could be applied to items that appear essentially unpublished online, are 'published' in obscure journals or are simply cited in newspaper articles. The scientific credentials of the individuals generating the work might also be suspect (I appreciate that people have to start somewhere and, occasionally, the workers making the observation just happen to be in the right place at the right time). Items based only on preliminary studies (or even casual observations), with a small sample size or with evidence of a weak design (e.g. no control group), are also highly debatable. Clear (or, even worse, hidden) evidence of commercial or political funding of the data collection should also give pause for thought. There can be major changes in science but, generally, findings that seem at variance with the scientific status quo need to be looked at carefully.

AMBER Intermediate reliability could be applied to findings in which many of the above weaknesses have been addressed (if not eliminated). Confidence can also be stronger when attempts have been made (by the original source or, better, by other scientists) to 'strengthen' a preliminary study by finding additional support. I may be being a bit pedantic, but I would place most modelling studies in this category. They do provide valuable guidance, but their outcome can always be changed dramatically by the choice of variables and/or the values assigned to them. Models should be (and often are) run many times with different inputs, but they deal with probabilities (as does all science) rather than certainties. The only thing that 'confirms' them is when they match the actual outcome.

GREEN The strongest category would be reserved only for substantial, well-conducted and professionally analysed studies, coming from reputable sources, which have been independently replicated (and clearly fit into an established scientific framework). They should also be published in reputable scientific journals, having gone through an appropriate editorial process (rushing the process in an emergency can be understandable, but it may generate more errors than doing it more slowly). Well-conducted meta-analyses (where data from several sources are statistically combined) can also instil more confidence, as they increase the sample size and indicate that a variety of sources are coming to the same conclusion.

I appreciate that most people are too busy to worry about the provenance of the claims that they read about, but I think it's helpful to remind folk that not all science is equal.


Food For Thought?

The link between global heating and food prices is clearly illustrated in a recent CarbonBrief piece (https://www.carbonbrief.org/five-charts-ho...