Systematic reviews: If in doubt, refer to the instructions

By Dr Micah D J Peters PhD
February 15th, 2019


Is full symptom assessment complete if only some of the items have been assessed?

Clinicians often trust that the guidelines that underpin their practice are rigorous and dependable, but recently, systematic reviews – the pinnacle of the evidence hierarchy and most trustworthy source of evidence for clinical guidelines – have come under fire for problems with rigor and bias (Demasi 2018).

Reporting guidelines have been developed to improve review quality and rigor, and numerous methodologies now exist for the conduct and reporting of different kinds of evidence synthesis, including for clinical guidelines (Guyatt et al. 2008).

The quality of systematic reviews themselves can easily be assessed using standardised tools such as AMSTAR 2 (Shea et al. 2017) or by comparing what the authors have reported with their stated methodology.

Many of these approaches highlight the importance of transparency, detail, and accuracy to ensure rigor and dependability.

While often challenging, undertaking a systematic review is essentially a process of following specific, step-by-step instructions and clearly documenting how you have followed them, or where and why you have deviated from them.

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) is widely regarded as the ‘gold standard’ for reporting traditional systematic reviews (Moher et al. 2009). Some journals endorse the use of PRISMA as a prerequisite for publication, and the growing adoption of PRISMA has been linked with higher methodological quality and reporting (Panic et al. 2013). However, not all reviews citing PRISMA appear to have followed its recommendations.

A review on a current topic – mandated nurse staffing ratios in acute care – is a very recent example of how reporting against guidelines can be a challenge (Olley et al. 2018). This review omits key elements of PRISMA adherence, including citation of a protocol, detailed inclusion criteria, at least one search strategy, and an assessment of the quality of the included sources of evidence. Combined, these omissions could limit the confidence a reader may have in the findings. To briefly explain why these issues are important: a protocol minimises bias and is considered integral to a true systematic review; detailed inclusion criteria also limit bias and further enhance a reader’s ability to understand the scope of the review; a reproducible search strategy allows the review’s process to be authenticated; and an assessment of quality enables appraisal of bias and the relative veracity of the findings and conclusions (Shamseer et al. 2015). Without knowing whether the review was conducted according to the instructions or accounted for the quality of the included evidence, trustworthy recommendations and conclusions are difficult to make.

Non-adherence to reporting standards is not rare; in fact, a field of meta-research has examined the issue in depth (Page and Moher 2017). Restrictive word limits may hamper some authors’ best efforts to explain their process in detail, but supplementary files and separately published protocols can help by linking to important information elsewhere. To justify their place at the top of the pile for informing evidence-based care, systematic reviews must uphold certain standards (Campbell et al. 2017). This is most easily done by authors understanding and transparently reporting against respected guidelines. Readers and users of systematic reviews must also ensure they are aware of reporting guidelines and can themselves critically appraise what they’re reading, not taking even ‘level one’ evidence at face value.

Campbell, J.M., Kavanagh, S., Kurmis, R. and Munn, Z. 2017. Systematic reviews in burns care: poor quality and getting worse. J Burn Care Res; 38(2):e552-67.
Demasi, M. 2018. Cochrane – a sinking ship? [Blog] BMJ EBM Spotlight. Available:
Guyatt, G.H., Oxman, A.D., Vist, G.E., Kunz, R., Falck-Ytter, Y., Alonso-Coello, P. and Schünemann, H.J. 2008. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ; 336(7650):924-6.
Moher, D., Liberati, A., Tetzlaff, J. and Altman, D.G. 2009. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med; 151(4):264-9.
Olley, R., Edwards, I., Avery, M. and Cooper, H. 2018. Systematic review of the evidence related to mandated nurse staffing ratios in acute hospitals. Aust Health Rev. Apr 12. doi: 10.1071/AH16252. [Epub ahead of print].
Page, M.J. and Moher, D. 2017. Evaluations of the uptake and impact of the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) Statement and extensions: a scoping review. Syst Rev; 6(1):263.
Panic, N., Leoncini, E., De Belvis, G., Ricciardi, W. and Boccia, S. 2013. Evaluation of the endorsement of the preferred reporting items for systematic reviews and meta-analysis (PRISMA) statement on the quality of published systematic review and meta-analyses. PLoS ONE; 8(12):e83138.
Shamseer, L., Moher, D., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., Shekelle, P. and Stewart, L.A. 2015. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation. BMJ; 349:g7647.
Shea, B.J., Reeves, B.C., Wells, G., Thuku, M., Hamel, C., Moran, J., Moher, D., Tugwell, P., Welch, V., Kristjansson, E. and Henry, D.A. 2017. AMSTAR 2: a critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ; 358:j4008.
Dr Micah D J Peters is the ANMF Federal Office National Policy Research Adviser based in the Rosemary Bryant AO Research Centre, School of Nursing and Midwifery, University of South Australia.
