Methodological quality
The extent to which the design and conduct of a trial are likely to have prevented systematic errors (bias). Variation in quality can explain variation in the results of trials included in a systematic review. Rigorously designed (better quality) trials are more likely to yield results that are closer to the “truth” (i.e., unbiased).
The degree to which the design and conduct of a study match the study objectives.
Was the spectrum of patients representative of the patients who will receive the test in practice?
Is the reference standard likely to correctly classify the target condition?
Is the time period between the reference standard and index test short enough to be reasonably sure that the target condition did not change between the two tests?
Did patients receive the same reference standard regardless of the index test result?
Did the whole sample, or a random selection of the sample, receive verification using a reference standard of diagnosis?
Was the reference standard independent of the index test (i.e., the index test did not form part of the reference standard)?
Were the index test results interpreted without knowledge of the results of the reference standard?
Were the reference standard results interpreted without knowledge of the results of the index test?
Were uninterpretable/intermediate test results reported?
Were the same clinical data available when test results were interpreted as would be available when the test is used in practice?
Tools
Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)
A MeaSurement Tool to Assess systematic Reviews (AMSTAR 2)
Risk Of Bias In Systematic reviews (ROBIS)
Papers
The first essential question to ask about the methods section of a published paper is: was the study original?
The second is: whom is the study about? The third: was the design of the study sensible? The fourth: was systematic bias avoided or minimised?
Finally, was the study large enough, and continued for long enough, to make the results credible?