Trying to improve the rigour of systematic reviews

What started as a response to a single, dangerously biased review grew into a catalogue of common limitations of literature reviews, examples of what can go wrong, and concrete advice on how to avoid these pitfalls.

Back in January 2019, an 'early view' research article was published in the journal Biological Conservation. The authors of this article claimed to have reviewed the evidence on insect population trends and identified "dramatic rates of decline", with "over 40% of insect species... threatened with extinction". A shocking finding, indeed.

But what was also worrying to me was that the authors had used some very strange methods to synthesise the evidence. For example, they claimed to have searched just one bibliographic database ("Web of Science"), whereas rigorous reviews search many different sources of evidence, since no single database indexes all research. Perhaps more worryingly, the authors had used a very limited set of search terms, which would certainly have missed vast swathes of research AND biased the findings towards research showing declines, despite the claim to have examined all population trends, including stable or increasing ones: a search built around decline-related terms will, by construction, preferentially retrieve studies that report declines.

I knew that this review would gain a lot of attention (as indeed it did; according to Elsevier, it had attracted more than 80,000 shares, likes and comments on social media as of 21.10.20). I was also worried that its findings were almost certainly unreliable because of the many biases and limitations in its methods. So, I assembled a team of collaborators: experts in evidence synthesis, entomology, and ecological meta-analysis. We started out drafting a response to the journal detailing what had gone wrong. But we soon realised that, among the nine of us, we had collected a large number of examples of biased and limited literature reviews in the field of ecology and environmental sciences.

Instead of focusing on one review, we decided to broaden our scope and highlight common major flaws across literature reviews that could be mitigated. We assembled a mini database of reviews and fine-tuned our shortlist of major flaws.

Our peer-review experience at Nature Ecology & Evolution was instrumental in clarifying our ideas, and even added one major flaw to our list. For that, we are indebted to our incredible peer reviewers. Our final manuscript is one that we hope will appeal to anyone thinking about conducting a literature review that involves a full synthesis of included studies. Our advice ranges from the quick and easy to the more in-depth and resource-intensive, but we believe there is something in it for everyone.

All of my co-authors agree that not every review needs to be a gold-standard systematic review. However, the methods should match the aims. Where there is an aim to be comprehensive, representative, precise, accurate, critical, definitive, or high-profile, review authors have a responsibility to select the most appropriate methods for synthesising evidence: in this way, review methods are no different from primary research methods. They must be fit-for-purpose.

Our manuscript represents not only a substantial writing effort; it also condenses reams of articles, books and training courses, and decades of evidence-synthesis experience, into a few pages of concrete advice. There is always more to learn, and we encourage the avid reader to delve into evidence synthesis methods across disciplines and sectors (there is SO much out there).

As people become more aware of the added value of a 'systematic'-style review, we hope that the research community keeps pace and learns to recognise when a review has used rigorous methods and when it hasn't. We remain hopeful...
