Clinicians often trust that the guidelines that underpin their practice are rigorous and dependable, but recently systematic reviews – the pinnacle of the evidence hierarchy and the most trustworthy source of evidence for clinical guidelines – have come under fire for problems with rigor and bias (Demasi 2018).
Reporting guidelines have been developed to improve review quality and rigor, and numerous methodologies for the conduct and reporting of different kinds of evidence synthesis now exist, including for clinical guidelines (Guyatt et al. 2008).
The quality of systematic reviews themselves can be readily assessed using standardised tools such as AMSTAR 2 (Shea et al. 2017), or by comparing what the authors report with their stated methodology.
Many of these approaches highlight the importance of transparency, detail, and accuracy to ensure rigor and dependability.
While often challenging, undertaking a systematic review is essentially a process of following specific, step-by-step instructions and clearly documenting how you have followed them, or where and why you have deviated from them.
The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) is widely regarded as the ‘gold standard’ for reporting traditional systematic reviews (Moher et al. 2009). Some journals endorse the use of PRISMA as a prerequisite for publication, and the growing adoption of PRISMA has been linked with higher methodological and reporting quality (Panic et al. 2013). However, not all reviews citing PRISMA appear to have followed its recommendations.
A review on a current topic – mandated nurse staffing ratios in acute care – is a very recent example of how reporting against guidelines can be a challenge (Olley et al. 2018). This review omits key elements defining PRISMA adherence, including: citation of a protocol, detailed inclusion criteria, at least one search strategy, and an assessment of the quality of the included sources of evidence. Combined, these issues could limit the confidence a reader may have in the findings. To briefly explain why these elements are important: a protocol minimises bias and is considered integral to true systematic reviews; detailed inclusion criteria also limit bias and further enhance a reader’s ability to understand the scope of the review; a reproducible search strategy allows authentication of the review’s process; and an assessment of quality enables appraisal of bias and the relative veracity of the findings and conclusions (Shamseer et al. 2016). Without knowing that the review was conducted according to these instructions, or that it accounted for the quality of the included evidence, trustworthy recommendations and conclusions are difficult to make.
Non-adherence to reporting standards is not rare; in fact, a field of meta-research has examined the issue in depth (Page and Moher 2017). It may be that restrictive word limits hamper some authors’ best efforts to explain their process in detail, but supplementary data and separately published protocols can assist by linking to important information elsewhere. To justify their place at the top of the pile for informing evidence-based care, systematic reviews must uphold certain standards (Campbell 2017). This is most easily done by authors understanding and transparently reporting against respected guidelines. Readers and users of systematic reviews must also ensure they are aware of reporting guidelines and can themselves critically appraise what they are reading, not taking even ‘level one’ evidence at face value.