Sample size calculation

https://www.surveysystem.com/sscalc.htm

Once the eligibility criteria for patients or a sample of the population have been described, the reviewer should examine whether the sample size of the study was calculated.

Sample size calculation is usually a requirement for ethics committee approval of a prospective study. Studies that perform a sample size calculation give a clear indication of methodological quality, diligence, and reasoning, and of adequate study power. The reasons are threefold: enough patients must be recruited to detect the anticipated effect; the chance of the study results being erroneous must be minimized; and resources must be used efficiently, because every patient recruited into the study costs money. Without a sample size calculation, there is no guarantee that statistically significant findings in a prospective study represent true results.

When a study reports no significant difference, two conclusions are possible: 1) there really is no effect, or 2) the study was too small to detect a difference. The reviewer needs to verify whether the manuscript describes the parameters used to calculate the sample size.

The parameters are: 1) the level of statistical significance, usually set at 5%; 2) the statistical power, usually set at 80% (i.e., a Type-II error of 20%); and 3) the clinical parameters (the expected event rates and the size of the effect to be detected).

The size of the clinical effect under investigation has an inverse relation with the required sample size: the larger the clinical effect, the smaller the sample needed to demonstrate the difference. Conversely, small clinical effects require larger sample sizes to be detected.
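This inverse relation can be illustrated with the standard normal-approximation formula for comparing two proportions (a sketch for illustration, not the method of the cited paper; the 12% baseline rate is reused from the vancomycin example in this section):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per group for comparing two proportions
    (normal approximation; illustrative sketch only)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_b = NormalDist().inv_cdf(power)           # power = 1 - beta
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# The larger the effect (reduction from a 12% baseline), the smaller the sample:
for reduction in (0.25, 0.50, 0.75):
    p2 = 0.12 * (1 - reduction)
    print(f"{int(reduction * 100)}% reduction -> n = {n_per_group(0.12, p2)} per group")
```

Running this shows the required per-group sample size falling sharply as the targeted reduction grows from 25% to 75%.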

A Type-I error (α error) determines the "level of statistical significance" of a study. It occurs when an apparently positive result is in fact negative (a false positive) and the null hypothesis is erroneously rejected: a treatment that appears effective in a clinical trial may not actually be effective.

A Type-I error is typically set at a significance level of 5% (α = 0.05), a 1-in-20 risk of a false-positive result. A Type-II error (β error) determines the "power" of a study: although there is a real difference (effect) between the groups, statistical testing may fail to show a significant result, so there is always a risk of missing a positive treatment effect (a false negative). The Type-II error is typically set at 10%–20%, giving a power of 1 − β = 1 − 0.20 = 0.80 (80%). A power of 80% means that if there is a real effect in the study, the probability of failing to detect it is only 20%.

From a practical standpoint, the reviewer can follow the 50/50 rule for proportions: roughly 50 events are needed in the control group for an 80% chance of finding a 50% reduction.7 For example, suppose a particular surgical spine technique has a 12% rate of infection (12/100, 24/200, 48/400, or 50/417), and the authors want to show a 50% reduction in infections with local use of topical vancomycin. The sample size is then 417 patients in the control group and 417 in the treatment group, for a total of 834 patients.

The questions that need to be answered by the reviewer are: 1) did the authors perform a sample size calculation; 2) do the authors mention the power of their study; 3) what level of significance was used (in %); and 4) are the clinical parameters used in the calculation reported? 1)
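The 50/50 rule arithmetic can be checked directly. The sketch below implements the heuristic as stated (events needed divided by the control event rate), not a formal power calculation:

```python
from math import ceil

def n_from_50_50_rule(control_event_rate: float, events_needed: int = 50) -> int:
    """50/50 heuristic: ~50 events in the control group give an ~80% chance
    of detecting a 50% reduction; n = events needed / control event rate."""
    return ceil(events_needed / control_event_rate)

n_control = n_from_50_50_rule(0.12)   # 12% infection rate -> 417 patients
total = 2 * n_control                 # equal-sized treatment group -> 834 patients
print(n_control, total)
```

With a 12% control event rate this reproduces the figures in the example: 417 patients per group, 834 in total.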


1)
Falavigna A, Blauth M, Kates SL. Critical review of a scientific manuscript: a practical guide for reviewers. J Neurosurg. 2018 Jan;128(1):312-321. doi: 10.3171/2017.5.JNS17809. Epub 2017 Oct 20. PubMed PMID: 29053077.