Naked Science Forum

General Science => General Science => Topic started by: chiralSPO on 06/02/2018 18:29:30

Title: Would registered reports be a good way to disseminate STEM research?
Post by: chiralSPO on 06/02/2018 18:29:30
One of the major problems with peer-reviewed research is that reviewers and editors are often more interested in how compelling the 'story' of the paper is than in how carefully the study itself was designed and executed. (Are the results presented in a way that unambiguously proves something unexpected? Great, publish in Nature! You showed that the actual system under study is very complex and that multiple explanations might be equally valid, but you yourself didn't have the means or time to experimentally distinguish them? Let's send that to Acta Apathetica...)

This not only deprives the field of important null results or confirmations of already-widely-accepted theories, but has also had a profound (and profoundly negative) effect on the approach to research. Instead of generating and testing hypotheses, many researchers resort to collecting as much information as possible and then seeing whether they can coax any trends or correlations out of it. (This isn't always a terrible research strategy, but it has major implications for how the results should be treated.) Given a large enough pool of data, the chance of there not being any spurious correlations falls to zero, so unscrupulous researchers can present apparently bullet-proof statistical analyses that have no predictive merit and are unlikely to be supported by attempts to replicate the study.

The alternative of "registered reports" involves review of the experimental method (is the research question interesting and well-addressed by the methods proposed?). Reviewers can offer recommendations before the experiment is done, and then once everyone is happy, the journal guarantees publication of the results and analysis whatever they may be.

I don't know how well this could be applied to chemistry (my field), where our understanding is so limited compared with the complexity of chemical systems, which means there are a lot of people shooting blindly or doing "high-throughput assays." But I do think there are many types of experiments within my field for which this would work well.

I just saw this article about people trying this for social sciences (business and finance): http://news.cornell.edu/stories/2018/01/business-professors-aim-revamp-academic-tradition

It has also been tried in the life sciences: https://www.elsevier.com/reviewers-update/story/innovation-in-publishing/registered-reports-a-step-change-in-scientific-publishing

What do you think?
Title: Re: Would registered reports be a good way to disseminate STEM research?
Post by: chris on 07/02/2018 22:51:04
Very good, well-written points you make.

One of my friends, an eminent geologist at Cambridge, once spoke with much scorn of "Nature" and "Science" - "tabloid journals", he said. "If you're publishing anything decent you send it to the Journal of Geology!"
Title: Re: Would registered reports be a good way to disseminate STEM research?
Post by: alancalverd on 07/02/2018 23:46:18
Given a large enough pool of data, the chances of there not being any spurious correlations falls to zero, so unscrupulous researchers can present apparently bullet-proof statistical analyses that have no predictive merit, and are unlikely to be supported by attempts to replicate the study.
My objection to most of climate "science", nearly all of economics and psychology, and the totality of theology: 1. Select your data. 2. Look for a correlation. 3. Publish. 4. Under no circumstances test for proof of causation. If you can select from a large enough pool of data, you will find some that fits your hypothesis in time to renew your grant application or get onto the Honours List.

I'm not much worried by the lack of "confirmations of accepted theories". You only need to attend a conference on radiation protection in diagnostic radiology to be told by 20 different speakers that photons travel in straight lines. Problem is that if you don't attend said conferences, you lose your Continuing Professional Development certification!

Null results are a contentious area in pharmaceutical research. Having spent a king's ransom on developing patented Molecule B, I don't want the world to know that it is no more effective than generic Molecule A for curing warts or dropsy: a null result can damage your share value without actually killing anyone. But there is a lot of pressure to publish null results of clinical trials, so we present them as "no worse than A, and in a prettier package".

Sadly, a lot of crap procedures turn up on the desks of clinical research ethics committees. A wise chairman of mine always insisted that if the science was flawed, it would be unethical to do any experiments at all, but we are continually admonished to look at the ethics only: never mind the 5000 volt electrodes, have the proposers unreasonably excluded lefthanded lesbians or insulted the undead? My test is very simple: if the proposers can't spot the difference between principal and principle, the proposal is rejected. This has resulted in about 90% rejection of proposals from academic institutions, where principal lecturers seem unable to read the signs on their own office doors. Trivial? Not in a world where hypo- and hyper- mean life and death, or where milli, micro and Mega will all pass a spell check.