The Relationship between the Components of Sample Size Estimates and Sample Size

 

Mohanasundari S.K.1, Sonia M.2

1PhD Scholar in INC, College of Nursing, AIIMS, Jodhpur, Rajasthan.

2Faculty, College of Nursing, AIIMS, Jodhpur, Rajasthan.

*Corresponding Author Email: roshinikrishitha@gmail.com

 

ABSTRACT:

Sample size calculation is a complex and crucial area of attention in the research process. An appropriate sample size acts as a strong foundation for evidence-based practice: too small a sample may fail to detect the effect, while too large a sample wastes resources. As researchers we have to ensure that the needed sample size is estimated so that the study generates the desired power and the findings can be generalized to the population. This is difficult unless the researcher is aware of how each component of the sample size estimate influences the sample size. This article briefly reviews the relationship between the components of the sample size estimates and the sample size.

 

KEYWORDS: Sample size estimates and sample size, Power, Research design.

 

 


INTRODUCTION:

The number of individuals or observations included in a study is referred to as the sample size, commonly denoted by the letter ‘n’.1 The act of deciding how many observations or replicates to include in a statistical sample is known as sample size estimation.2 Estimating and justifying the sample size is one of the first considerations in conducting an empirical study. The main goal of a sample size calculation is to determine how many participants are needed to detect a clinically relevant treatment effect, because this serves as a strong foundation for evidence-based practice.3 The sample size affects the precision of our estimates and the study’s power to draw conclusions.1 It also has a significant bearing on the hypothesis and study design, and there is no simple way to calculate the effective sample size for reaching a reliable result.4

 

A study with a small sample size may lack the statistical power to detect significant effects, resulting in incorrect answers to crucial research questions. A study with an excessive sample size, on the other hand, wastes resources and may unnecessarily expose study participants to risk.5 As a result, it is critical to optimize the sample size to obtain a reliable result. It is fundamental statistical practice to estimate the sample size before beginning a clinical investigation in order to avoid bias in the interpretation of the data.2 Pre-study calculation of the required sample size is warranted in the majority of clinical trials, and it is frequently influenced by the cost, duration, or practicality of gathering data, as well as the need for sufficient statistical power.6 Beyond these considerations, the sample size estimation for a clinical trial should be based on components such as (a) the effect size; (b) the standard deviation of the population (for continuous data); (c) the targeted power to draw conclusions; and (d) the significance level.7 It is also important to remember that different study designs necessitate different sample sizes. Even though many studies and textbooks describe how to estimate sample size for different studies, unless the relationship between the components of sample size estimation is understood, it is difficult for the researcher to arrive at a suitable sample size for the chosen design. The objective of this review is to describe the relationship between the components of sample size estimation and the sample size calculated for various studies.

 

Software available to estimate sample size:

Manually estimating the sample size for a large population is nearly impractical and gives little confidence in generalizing the findings. Software, however, can help determine the power of the test and the sample size that fits the study, allowing greater confidence in generalizing findings to a large population. Several packages have been developed to help researchers at all levels calculate the appropriate sample size for studying any population and thereby improve the precision of research.

1. Researchclue Taro Yamane Sample Size Calculator: Fast, reliable software that determines the sample size for a given population size using Yamane’s formula.

2. Power Analysis and Sample Size (PASS) Software: A dedicated package that aids in the estimation of a study’s power and sample size. This software is available at http://ncss.com/software/pass.

3. The Raosoft Sample Size Calculator: This programme calculates both sample size and confidence intervals, taking the margin of error, confidence level, and response distribution into account. It also provides a visual representation of how the margin of error changes with various sample sizes. This software is available at http://raosoft.com/samplesize.html.

4. The Survey System: Creative Research Systems provides this sample size calculator as a public service alongside its survey software. It is necessary to understand the confidence interval and confidence level before using the calculator. This software is available at https://www.surveysystem.com/sscalc.htm.

5. Power and Precision Software: This programme assists researchers in determining the power of a test, i.e. the ability to reject a false null hypothesis. This software is available at http://power-analysis.com.8

6. nMaster 2.0 Sample Size Software: This has an advantage over other software that incorporates sample size calculation (STATA, EpiInfo, nQuery, etc.) in terms of content, ease of use, and cost. No other software incorporates all the situations covered by nMaster 2.0. Its target audience is anyone, from residents and Ph.D. students to independent researchers, who conducts or consumes quantitative research.9

 

7. G*Power Software: G*Power has a graphical user interface (GUI) and is easy to use for calculating sample size and power for various statistical methods (F, t, χ2, Z, and exact tests). This software is available at www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3.10

 

Also available are specialized computer programs such as nQuery Advisor, and statistical packages such as SPSS, MINITAB, and SAS, which will run on a desktop computer and can be used both for sample size calculations and for performing statistical analysis of data.7
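Besides the packages above, the same calculations can be scripted with open-source statistical libraries. A minimal sketch follows, assuming a Python environment with the statsmodels package installed (statsmodels is not mentioned in the source; the effect size used is hypothetical):

```python
from statsmodels.stats.power import TTestIndPower

# Required n per group for an independent-samples t test,
# given effect size, significance level, and desired power.
n = TTestIndPower().solve_power(effect_size=0.5,   # Cohen's d (hypothetical)
                                alpha=0.05,        # significance level
                                power=0.80,        # desired power
                                alternative='two-sided')
print(round(n))  # ~64 per group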

 

Relationship between components of sample size estimation and sample size:

When we conduct a statistical test, we are usually comparing two hypotheses: the null hypothesis and the alternative hypothesis. Statistical tests look for evidence that allows us to reject the null hypothesis and conclude that the intervention had an effect. Increasing the sample size makes a hypothesis test more sensitive and more likely to reject the null hypothesis when it is, in fact, false; in other words, it increases the power of the test.

 

1. The null hypothesis: A hypothesis that is considered ‘true’ unless it is proven false by experimental data. It claims that there is no genuine difference between the sample statistic and the population parameter, and that any discrepancy is attributable to sampling error. For example, if we are measuring the post-operative hospital stay of children after music therapy, the null hypothesis is that there will be no significant difference in the duration of post-operative hospital stay between the music group and the control group among children undergoing surgery.11

 

2. The alternative hypothesis: It states that there is a genuine difference between the sample statistic and the population parameter, and that the difference is due to the influence of the intervention (the cause). Using the example above, the alternative hypothesis is that there will be a significant difference in the duration of post-operative hospital stay between the music group and the control group among children undergoing surgery. The alternative hypothesis decides the type of test (one-tailed or two-tailed) to be used.12

 

3. One-tailed test: If the alternative hypothesis is directional, a one-tailed test is used; it allows for the possibility of an effect in one direction only. E.g. the duration of post-operative hospital stay will be significantly shorter among children receiving music therapy than among children in the control group. To detect the same effect with the same power, a one-tailed test requires a smaller sample size.

 

4. Two-tailed test: If the alternative hypothesis is non-directional, a two-tailed test is employed to allow for an effect in either direction (an equal 50 percent probability of either a positive or a negative effect). E.g. there will be a considerable difference in the duration of post-operative hospital stay between the music group and the control group among children undergoing surgery. In this scenario, there is a 50% chance that the duration of post-operative hospital stay in the music group will be shorter than in the control group, and a 50% chance that it will be longer. As a result, a two-tailed test requires a larger sample size to detect the desired effect with the same power. When we increase the sample size, the standard error falls; a given difference between the sample statistic and the hypothesised parameter then yields a larger test statistic and a smaller p value, making it more likely that a genuinely false null hypothesis is rejected.
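This one-tailed versus two-tailed difference can be illustrated with a small sketch (Python with statsmodels assumed; the effect size is hypothetical):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for alternative in ('two-sided', 'larger'):   # 'larger' = one-tailed
    n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                             alternative=alternative)
    print(alternative, round(n))
# The two-sided test needs roughly 64 per group; the one-sided test
# only roughly 51 for the same effect and the same power.
```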

 

5. Type-1 error (α error or false positive): With every statistical test there is always a risk of concluding that a difference exists between the sample and the population parameter when none actually exists. This is referred to as a Type I error (rejection of a true null hypothesis). As the sample size grows, the confidence level (1−α) increases and the chance of a Type I error diminishes.

 

6. Type-2 error (β error or false negative): Similarly, it is possible that we will fail to detect a difference between the sample and the population parameter even when one exists. This is called a Type II error (failure to reject a false null hypothesis). The likelihood of a Type II error falls as the sample size grows, while the maximum probability of a Type I error stays constant by definition. It is impossible to reduce both type-1 and type-2 errors at the same time: the type-1 error is normally fixed at the tolerance limit (usually 5%), and the type-2 error is reduced by increasing the sample size.

 

7. Level of confidence (1−α): The level of confidence used in sample size calculations is the complement of the type I error; it refers to the percentage of the time we expect the test result to be correct. Because the margin of error and confidence interval depend on it, the level of confidence should be defined ahead of the analysis. Confidence levels of 90%, 95%, or 99% are frequently used; a 99% confidence level means we can be 99% certain that the test result is correct. The confidence level equals 1 − the alpha level, so if the significance level is 0.05 (5%), the corresponding confidence level is 95%. As the sample size increases, the confidence level also increases, and the chance of committing a type-1 error is minimized.13
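The correspondence between the confidence level, α, and the critical value can be checked directly; a short sketch assuming Python with scipy installed:

```python
from scipy.stats import norm

for conf in (0.90, 0.95, 0.99):
    alpha = 1 - conf                      # significance level
    z_two = norm.ppf(1 - alpha / 2)       # two-tailed critical value
    z_one = norm.ppf(1 - alpha)           # one-tailed critical value
    print(f"{conf:.0%}: two-tailed z = {z_two:.3f}, one-tailed z = {z_one:.3f}")
# 90%: 1.645/1.282; 95%: 1.960/1.645; 99%: 2.576/2.326 (cf. Table-1)
```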

 

8. Level of significance (α): The level of significance is the complement of the confidence level; it represents the maximum allowable limit of type I error. There is always a risk that the differences between a sample of data and a population parameter are due to chance rather than the intervention. Statistical significance testing allows us to determine how likely it is that the observed changes arose at random rather than from the intervention. To limit type-1 error, researchers select the significance level for each statistical test they run. The probability (‘P’) of rejecting the null hypothesis when it is true is frequently set at 0.05 (5%): when rejecting a true null hypothesis with 95 percent confidence that the conclusion is accurate, the researcher is prepared to be incorrect 5% of the time, or 5 times out of 100. Fixing the level at 0.01 (a 1 percent chance of a type-1 error) or even 0.001 (a 0.1 percent risk) is more stringent. As the sample size grows, the achievable level of significance decreases, the confidence level rises, and the risk of making a type-1 error decreases.

 

9. A confidence interval (CI): A confidence interval is a range of values, bounded above and below the sample statistic, that is likely to contain the unknown population parameter. If the hypothesised parameter lies inside the confidence interval, we must reject the alternative hypothesis and retain the null hypothesis, which implies that there is no discrepancy between the sample data and the population parameter. Most confidence intervals are built with confidence levels of 95 percent or 99 percent. The larger the sample size, the narrower the confidence interval at a given confidence level, and the less likely we are to make a type-1 error.15 For a mean, the confidence interval is computed as

CI = x̄ ± z × (s/√n), where:

·       CI: confidence interval

·       x̄: sample mean

·       z: the z-critical value for the chosen confidence level (level of significance)

·       s: sample standard deviation

·       n: sample size
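A worked numeric sketch of this formula (Python with scipy assumed; the sample values are hypothetical and chosen to reproduce the 13.5–19.5 interval used in point 11 below):

```python
import math
from scipy.stats import norm

x_bar, s, n = 16.5, 7.65, 25            # hypothetical sample mean, SD, size
conf = 0.95
z = norm.ppf(1 - (1 - conf) / 2)        # 1.96 for a 95% confidence level
margin = z * s / math.sqrt(n)           # margin of error = z * s / sqrt(n)
print(round(x_bar - margin, 1), round(x_bar + margin, 1))  # 13.5 19.5
```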

 

10. Z-critical value: A critical value of z (Z-score) is used when the data are normally distributed and the population standard deviation is known, or when the sample size is large. Every statistic has a probability, and every estimate from a sample has a margin of error, which is why the critical value of z is used to compute the margin of error. As the sample size increases, the margin of error decreases, the confidence interval narrows, and the confidence level rises, because with more observations the sample mean settles closer to the population mean; the standardized difference is therefore less likely to lie far from 0, and the chance of committing a type-1 error is minimized.

 

11. Margin of error: The margin of error indicates, in percentage points, how much the results may differ from the population value. A 95% confidence interval with a 6% margin of error, for example, indicates that the result will be within 6 percentage points of the true population value 95% of the time. The observed score minus the margin of error is the lower bound of the confidence interval; the observed score plus the margin of error is the upper bound. The width of the confidence interval is therefore twice the margin of error; equivalently, the margin of error is half the total width of the confidence interval. Suppose the lower and upper bounds (limits) of a 95 percent confidence interval for a population mean are 13.5 and 19.5 respectively: the width of the interval is 19.5 − 13.5 = 6, and the margin of error is 6/2 = 3. The larger the sample size and the lower the variability of the sample, the narrower the confidence interval at a given confidence level, the smaller the margin of error, and the lower the likelihood of making a type-1 error.14
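Rearranging margin = z × s/√n gives the sample size needed for a target margin of error; a sketch under the same assumed environment (Python with scipy, hypothetical values):

```python
import math
from scipy.stats import norm

def n_for_margin(s, margin, conf=0.95):
    # Solve margin = z * s / sqrt(n) for n, rounding up.
    z = norm.ppf(1 - (1 - conf) / 2)
    return math.ceil((z * s / margin) ** 2)

print(n_for_margin(s=7.65, margin=3.0))  # 25 subjects
print(n_for_margin(s=7.65, margin=1.5))  # 100: halving the margin quadruples n
```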

 

12. ‘P’ value: The result of a statistical test is expressed as a p-value between 0 and 1, which is compared with the chosen level of significance (α); the p-value describes the probability of obtaining the observed result, or a more extreme one, when the null hypothesis is true. The maximum tolerated probability of rejecting a true null hypothesis is fixed in advance, commonly at α = 0.01 (1%) or 0.05 (5%); for a two-tailed test this probability is split between the two tails, i.e. α/2 (0.005 or 0.025) in each tail.

 

13. Sampling variability: When estimating the sample size, an investigator must account for variation within the observed data. The term “variability” refers to the extent to which values differ from one sample to another relative to the sample mean. Sampling variability and sampling error are not interchangeable: sampling error is the difference between the sample statistic and the population parameter. If the samples are more homogeneous, both the variability and the sampling error are reduced and a smaller sample size suffices; if the variability between samples is greater, a larger sample size is necessary to minimise type-1 error.

 

14. Sample standard deviation: The standard deviation is the measure of dispersion or variability in a sample of data: the root mean square of the differences between the observed values and the sample mean. As the sample size increases, it is the standard error of the mean (s/√n), rather than the standard deviation itself, that decreases.

The standard deviation formulas for a sample mean and a population mean are

s = √[ Σ(Xi − X̄)² / (n − 1) ]          σ = √[ Σ(Xi − µ)² / N ]

Xi = each of the values of the data

X̄ = sample mean

n = number of observations in the sample

µ = the population mean

N = number of observations in the population

s = standard deviation of the sample

σ = standard deviation of the population
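A short sketch (standard-library Python, hypothetical data) distinguishing the sample SD from the standard error that actually shrinks with n:

```python
import math
import statistics

data = [12, 15, 9, 14, 11, 13, 10, 16]      # hypothetical observations
s = statistics.stdev(data)                   # sample SD (n - 1 denominator)
se = s / math.sqrt(len(data))                # standard error of the mean
print(f"s = {s:.2f}, SE = {se:.2f}")
# The SD estimates the spread of the data; it is the standard error
# s / sqrt(n), not the SD itself, that decreases as n grows.
```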

 

15. Statistical power (1−β): Power is the complement of the type II error: the ability of a statistical test to reject a false null hypothesis, i.e. to find a statistically significant difference when such a difference actually exists. Power represents the probability of accepting the alternative hypothesis when it is true. Ideally, the minimum power required of a study is 0.8 or greater; that means we should have an 80% or greater chance of finding a statistically significant difference and at most a 20% chance of failing to find one. Statistical power is closely associated with sample size, effect size, the desired significance level, and the standard deviation of the parameter: the power of a study increases with sample size, decreases as the significance level is made more stringent, and increases with the size of the effect. As the sample size gets larger, the Z value for a given true difference increases, so we are more likely to reject a false null hypothesis; the corresponding type II error (β) therefore decreases, and the power (1−β) increases.10
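The dependence of power on sample size can be made visible in a few lines (Python with statsmodels assumed; the effect size d = 0.5 is hypothetical):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n in (20, 40, 64, 100):
    p = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05,
                       alternative='two-sided')
    print(f"n per group = {n:3d}: power = {p:.2f}")
# Power climbs with n: roughly 0.33, 0.60, 0.80, and 0.94 for d = 0.5.
```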


 

Table-1: Z-critical values and interpretation for various confidence levels, levels of significance, and power

| Confidence level (1−α) | Level of significance (α) | Power (1−β) | Z-critical, two-tailed (Zα/2) | Z-critical, one-tailed (Zα) | Z-critical, one-tailed (Z1−β) | Confidence interval, two-tailed (±Zα/2) | Confidence interval, one-tailed (±Zα) | Interpretation |
|---|---|---|---|---|---|---|---|---|
| 80% | 0.20 (20%) | 0.80 (80%) | 1.28 | 0.84 | 0.84 | ±1.28 | ±0.84 | Considerable |
| 90% | 0.10 (10%) | 0.90 (90%) | 1.645 | 1.28 | 1.28 | ±1.645 | ±1.28 | Quite acceptable |
| 95% | 0.05 (5%) | 0.95 (95%) | 1.960 | 1.65 | 1.65 | ±1.960 | ±1.65 | Acceptable |
| 98% | 0.02 (2%) | 0.98 (98%) | 2.326 | 2.06 | 2.06 | ±2.326 | ±2.06 | Very acceptable |
| 99% | 0.01 (1%) | 0.99 (99%) | 2.576 | 2.33 | 2.33 | ±2.576 | ±2.33 | Very acceptable |
| 99.9% | 0.001 (0.1%) | 0.999 (99.9%) | 3.291 | 3.10 | 3.10 | ±3.291 | ±3.10 | Very acceptable |

 


16. A priori power analysis:

Power analysis can be used to calculate the required sample size in advance, during the planning and design phase, so that the investigator can detect a given effect size at the chosen level of significance with the desired power. For any power calculation we have to know the type of statistical test planned, the chosen alpha value or significance level, the expected effect size, and the sample size (N) we plan to use. When these values are entered into power analysis software, a power value between 0 and 1 is generated; if the power is less than 0.8, the sample size needs to be increased.3 Because an a priori analysis provides a method for controlling both type I and type II errors when testing the hypothesis, it is the ideal method of sample size and power calculation.
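As a minimal illustration (again assuming Python with statsmodels, which the source does not mention), an a priori calculation fixes the effect size, α, and power, and solves for n:

```python
from statsmodels.stats.power import TTestIndPower

# A priori: effect size, alpha, and power are fixed in advance; solve for n.
n_per_group = TTestIndPower().solve_power(effect_size=0.4,  # expected Cohen's d
                                          alpha=0.05,       # significance level
                                          power=0.80,       # desired power
                                          alternative='two-sided')
print(round(n_per_group))  # ~99 per group (cf. Table-2, power .80, d = .40)
```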

 

17. Post hoc power analysis: In contrast, a post-hoc analysis is typically conducted after the completion of the study: the sample size N is already given, and the power level (1−β) is calculated from that sample size, the effect size, and the chosen α level. Post-hoc power analysis is a less ideal approach to sample size and power calculation than a priori analysis because it controls only α (type-1 error) and not β (type-2 error). It is criticized because the type II error calculated from the results of negative clinical trials is always high, which sometimes leads to incorrect conclusions about power; post-hoc power analysis should therefore be used cautiously, for the critical evaluation of studies with large type II errors.15,16
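Under the same assumed environment, a post-hoc analysis reverses the calculation: n is fixed by the completed study, and power is solved for (the group size below is hypothetical):

```python
from statsmodels.stats.power import TTestIndPower

# Post hoc: the sample size is already fixed; solve for the achieved power.
achieved_power = TTestIndPower().power(effect_size=0.4,  # observed effect size
                                       nobs1=30,         # subjects per group
                                       alpha=0.05,
                                       alternative='two-sided')
print(round(achieved_power, 2))  # ~0.33: the study was underpowered
```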

18. Effect size: The effect size is also known as the standardised difference. It is an analytical concept that quantifies the strength of the association, or the magnitude of the difference, between two groups. It is commonly evaluated using Cohen’s d, in which the difference between the means of the two groups is divided by the pooled standard deviation. If the value is < 0.1 the effect is considered trivial; between 0.1 and 0.3, small; between 0.3 and 0.5, moderate; and > 0.5, large. This parameter does not depend on sample size and is therefore very practical. Statistical power depends on both effect size and sample size: if the effect size of the intervention is large, it can be detected with a smaller sample, whereas a smaller effect size requires a larger sample.17

The effect size formula is

ES = (X̄1 − X̄2) / SDpooled, with SDpooled = √[(SD1² + SD2²)/2]

ES: effect size

X̄1 = mean of group 1

X̄2 = mean of group 2

SDpooled = pooled standard deviation of the two groups

SD1² = squared standard deviation (variance) of group 1

SD2² = squared standard deviation (variance) of group 2
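A minimal sketch of this calculation in Python (the group means and SDs below are hypothetical, loosely echoing the music-therapy example):

```python
import math

def cohens_d(mean1, mean2, sd1, sd2):
    # Pooled SD taken as the root mean square of the two group SDs
    # (appropriate for equal group sizes).
    sd_pooled = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean1 - mean2) / sd_pooled

# Hypothetical post-operative stay (days): music group vs. control group.
d = cohens_d(4.2, 5.1, 1.5, 1.7)
print(round(d, 2))  # ~ -0.56; the sign shows the direction of the effect
```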


 

Table-2: Sample size based on desired power and effect size (calculated from Cohen, 1988)

| Power | d = .10 | .20 | .30 | .40 | .50 | .60 | .70 | .80 | 1.0 | 1.20 | 1.40 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| .25 | 332 | 84 | 38 | 22 | 14 | 10 | 8 | 6 | 5 | 4 | 3 |
| .50 | 769 | 193 | 86 | 49 | 32 | 22 | 17 | 13 | 9 | 7 | 5 |
| .60 | 981 | 246 | 110 | 62 | 40 | 28 | 21 | 16 | 11 | 8 | 6 |
| 2/3 | 1144 | 287 | 128 | 73 | 47 | 33 | 24 | 19 | 12 | 9 | 7 |
| .70 | 1235 | 310 | 138 | 78 | 50 | 35 | 26 | 20 | 13 | 10 | 7 |
| .75 | 1389 | 348 | 155 | 88 | 57 | 40 | 29 | 23 | 15 | 11 | 8 |
| .80 | 1571 | 393 | 175 | 99 | 64 | 45 | 33 | 26 | 17 | 12 | 9 |
| .85 | 1797 | 450 | 201 | 113 | 73 | 51 | 38 | 29 | 19 | 14 | 10 |
| .90 | 2102 | 526 | 234 | 132 | 85 | 59 | 44 | 34 | 22 | 16 | 12 |
| .95 | 2600 | 651 | 290 | 163 | 105 | 73 | 54 | 42 | 27 | 19 | 14 |
| .99 | 3675 | 920 | 409 | 231 | 148 | 103 | 76 | 58 | 38 | 27 | 20 |
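These tabulated values can be reproduced (to within rounding) with any power-analysis software; a spot-check in Python with statsmodels (an assumed environment; the entries appear to be per-group sizes for a two-tailed test at α = .05):

```python
from statsmodels.stats.power import TTestIndPower

# Spot-check Table-2 (independent-samples t test, two-tailed alpha = .05).
analysis = TTestIndPower()
for power, d in [(0.80, 0.50), (0.90, 0.30), (0.95, 0.20)]:
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=power)
    print(f"power={power}, d={d}: n per group = {round(n)}")
# ~64, ~234, and ~651, matching the tabulated entries.
```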

 

Table-3: Summary of components of sample size estimates and sample size

| Component of the sample size estimate | Relationship with sample size |
|---|---|
| One-tailed test (to detect the same effect with the same power as a two-tailed test) | Requires a smaller sample size |
| Two-tailed test (to detect the same effect with the same power) | Requires a larger sample size |
| To decrease type-1 error | Requires a larger sample size |
| To decrease type-2 error | Requires a larger sample size |
| When the level of confidence is increased | Sample size increases |
| When the level of significance is decreased | Sample size increases |
| When the confidence interval is decreased | Sample size increases |
| When the margin of error is decreased | Sample size increases |
| When sampling variability is decreased | A smaller sample size is required |
| To increase the power of the study | Requires a larger sample size |
| To detect a smaller effect size | Requires a larger sample size |

 

Summary of relationships within the components of sample size estimates:

Increased power → increased effect size → decreased type-II error → decreased level of significance → increased confidence level → decreased confidence interval → decreased margin of error → increased sample size → minimized type-1 error.

 

Table-4: Sample size calculation for various research designs:18

Cross-sectional or descriptive studies:

a) For an unknown population, categorical variables (nominal or ordinal scale), when estimating a proportion (one group):

n = Z²(1−α/2) × p × q / d²

-        n = sample size
-        Z(1−α/2) = statistic for the level of confidence/standard value (for 95% confidence, the two-tailed Z score is 1.96 and the one-tailed Z score is 1.645)
-        p = expected prevalence or proportion (if the expected prevalence is 40%, then p = 0.4). The researcher must know the assumed p, since the precision (d) should be chosen according to the magnitude of p (Table-5). If no data are available to support an assumed p, 0.5 can be used to produce the most conservative sample size.
-        q = 1 − p
-        d = precision/absolute error (if the precision is 5%, then d = 0.05) (corresponding to the effect size)

b) For an unknown population, continuous variables (interval or ratio scale), when estimating a mean:

n = Z²(1−α/2) × SD² / d²

-        n = sample size
-        Z(1−α/2) = statistic for the level of confidence/standard value
-        SD = standard deviation of the variable (the value of SD can be taken from a previous study or a pilot study)
-        d = precision/absolute error, in the units of the variable

c) For a finite population:

n′ = n × z² × P(1−P) / [e²(n−1) + z² × P(1−P)]

-        n′ = sample size
-        n = population size
-        z = statistic for the level of confidence (z = 1.96 for a two-tailed 95% confidence level)
-        e = margin of error
-        P = population proportion

Case-control studies:

a) For a dichotomous variable (nominal or ordinal scale), when a proportion is the parameter of the study (binary exposure):

n = [(r + 1)/r] × p̄(1 − p̄)(Z(1−β) + Z(1−α/2))² / (P1 − P2)²

-        n = desired number of cases
-        r = ratio of controls to cases (for equal numbers of cases and controls, r = 1)
-        p̄ = pooled proportion of the population = (P1 + P2)/2
-        Z(1−β) = standard value for the desired power (0.84 for 80% power and 1.28 for 90% power)
-        Z(1−α/2) = critical value, a standard value for the corresponding level of confidence (1.96 at 95% CI or 5% type I error; 2.58 at 99% CI or 1% type I error)
-        P1 = proportion in cases
-        P2 = proportion in controls

b) For a continuous variable (interval or ratio scale), when a mean is the parameter of the study:

n = [(r + 1)/r] × σ²(Z(1−β) + Z(1−α/2))² / d²

-        n = number of samples to be found
-        r = ratio of controls to cases
-        Z(1−β) = standard value for the desired power (0.84 for 80% power and 1.28 for 90% power)
-        Z(1−α/2) = critical value for the corresponding level of confidence (1.96 at 95% CI; 2.58 at 99% CI or 1% type I error)
-        σ = standard deviation, based on a previous study or a pilot study
-        d = effect size (difference in the means from previous studies or a pilot study)

Cohort studies:

a) For independent cohort studies:

n = {Z(1−α/2)√[(1 + 1/m) p′(1 − p′)] + Z(1−β)√[p1(1 − p1) + p0(1 − p0)/m]}² / (p1 − p0)²

-        n = total number of desired study subjects (cases) to identify the true relative risk with a two-sided type-I error
-        m = number of control subjects per case subject
-        Z(1−β) = standard value for the desired power (0.84 for 80% power and 1.28 for 90% power)
-        Z(1−α/2) = critical value for the corresponding level of confidence (1.96 at 95% CI; 2.58 at 99% CI or 1% type I error)
-        p0 = probability of the event in controls
-        p1 = probability of the event in the experimental group
-        p′ = (p1 + m·p0)/(m + 1)
-        nc = continuity-corrected sample size
-        Ψ = odds ratio

b) For paired cohort studies (formula not reproduced in the source):

-        n = sample size
-        Z(1−β) = standard value for the desired power (0.84 for 80% power and 1.28 for 90% power)
-        Z(1−α/2) = critical value for the corresponding level of confidence (1.96 at 95% CI; 2.58 at 99% CI or 1% type I error)
-        p0 = probability of the event in controls
-        p1 = probability of the event in the experimental group
-        r = ratio of controls to cases

Comparative studies:

a) For a categorical variable (nominal or ordinal scale), when a proportion is the parameter of the study:

n = C × [p1(1 − p1) + p2(1 − p2)] / (p1 − p2)²

-        n = sample size for one group
-        p1 and p2 = proportions in the two groups
-        C = standard value for the corresponding levels of α and β selected for the study

b) For a continuous variable (interval or ratio scale), when a mean is the parameter of the study:

n = (σ1² + σ2²)(Z(1−β) + Z(1−α/2))² / d²

-        n = sample size
-        d = difference in the means of the two groups (effect size)
-        σ1 = SD of group 1
-        σ2 = SD of group 2
-        Z(1−β) = standard value for the desired power
-        Z(1−α/2) = critical value for the corresponding level of confidence (1.96 at 95% CI; 2.58 at 99% CI or 1% type I error)

c) For a comparison between two groups on continuous variables, with a common standard deviation:

n = 2C × SD² / d²

-        n = sample size for one group
-        d = detectable difference in the means of the two groups (effect size)
-        SD = common standard deviation
-        C = constant value that depends on the levels of α and β selected for the study

Experimental studies:

a) Sample size to rule out a difference (effect size) between two groups for continuous (interval or ratio) variables:

N = 2S²(Z(1−α/2) + Z(1−β/2))² / (𝛿 − 𝛿0)²

-        N = sample size for each group
-        𝛿 = difference in the means of the two treatment effects
-        𝛿0 = acceptable margin of error
-        Z(1−α/2) = statistical level of confidence/standard value for a one-tailed (1.645) or two-tailed (1.96) test
-        Z(1−β/2) = statistical power/standard value (0.84)
-        S² = pooled variance of both comparison groups

b) Sample size to rule out a difference (effect size) between two groups on the basis of a difference in proportions (dichotomous nominal/ordinal variables) (formula not reproduced in the source):

-        N = sample size for each group
-        d = 𝛿 = difference in the two treatment effects
-        𝛿0 = acceptable margin of error
-        Z(1−α/2) = statistical level of confidence/standard value for a one-tailed (1.645) or two-tailed (1.96) test
-        Z(1−β/2) = statistical power/standard value (0.84)
-        P = response rate of the standard intervention
-        P0 = response rate of the new intervention
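To make the case-control entry concrete, here is a small sketch (Python with scipy assumed; the exposure proportions are hypothetical) of the binary-exposure formula from Table-4:

```python
import math
from scipy.stats import norm

def n_case_control(p1, p2, power=0.80, alpha=0.05, r=1):
    """Cases required for a case-control study with a binary exposure,
    per the Table-4 formula; r = controls per case."""
    z_alpha = norm.ppf(1 - alpha / 2)        # e.g. 1.96 for 95% confidence
    z_beta = norm.ppf(power)                 # e.g. 0.84 for 80% power
    p_bar = (p1 + p2) / 2                    # pooled proportion
    n = ((r + 1) / r) * p_bar * (1 - p_bar) * (z_alpha + z_beta) ** 2 \
        / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical: exposure in 40% of cases vs. 25% of controls.
print(n_case_control(0.40, 0.25))  # ~154 cases (and, with r=1, 154 controls)
```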

 


Table-5: Sample size to estimate prevalence with different precision at 95% confidence

| Precision (d) | Assumed prevalence 0.05 | Assumed prevalence 0.2 | Assumed prevalence 0.6 |
|---|---|---|---|
| 0.01 | 1825 | 6147 | 9220 |
| 0.04 | 114 | 384 | 576 |
| 0.10 | 18 | 61 | 92 |
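Table-5 follows directly from the cross-sectional proportion formula n = Z²(1−α/2) × p(1−p)/d²; a quick check in Python (scipy assumed):

```python
import math
from scipy.stats import norm

def n_prevalence(p, d, conf=0.95):
    # Cross-sectional sample size for a proportion p with absolute precision d.
    z = norm.ppf(1 - (1 - conf) / 2)
    return math.ceil(z ** 2 * p * (1 - p) / d ** 2)

for d in (0.01, 0.04, 0.10):
    print(d, [n_prevalence(p, d) for p in (0.05, 0.2, 0.6)])
# Reproduces Table-5 to within ±1 (rounding up vs. rounding to nearest).
```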

 

CONCLUSION:

The researcher should always calculate a sample size appropriate to the study design, so that the desired power can be obtained and the findings generalized to the population. Without understanding the relationships between the components of the sample size estimate, it is difficult to decide whether the chosen study needs a larger or smaller sample. This article briefly reviewed the relationship between the components of the sample size estimates and the sample size.

 

CONFLICT OF INTEREST:

There is no conflict of interest.

 

REFERENCES:

1.      Sample size and power. Institute for Work & Health, Toronto. [internet] [Cited Apr 19/2022]. Available from: https://www.iwh.on.ca/what-researchers-mean-by/sample-size-and-power.

2.      Kadam P, Bhalerao S. Sample size calculation. Int J Ayurveda Res. 2010 Jan;1(1):55-7. doi: 10.4103/0974-7788.59946.

3.      Pourhoseingholi MA, Vahedi M, Rahimzadeh M. Sample size calculation in medical studies. Gastroenterol Hepatol Bed Bench. 2013 Winter;6(1):14-7.

4.      Serdar CC, Cihan M, Yücel D, Serdar MA. Sample size, power and effect size revisited: simplified and practical approaches in pre-clinical, clinical and laboratory studies. Biochem Med (Zagreb). 2021 Feb 15;31(1):010502. doi: 10.11613/BM.2021.010502.

5.      Guo, Y., Logan, H.L., Glueck, D.H. et al. Selecting a sample size for studies with repeated measures. BMC Med Res Methodol 13, 100 (2013). https://doi.org/10.1186/1471-2288-13-100.

6.      Noordzij M, Tripepi G, Dekker FW, Zoccali C, Tanck MW, Jager KJ. Sample size calculations: basic principles and common pitfalls. Nephrology Dialysis Transplantation. 2010; 25(5):1388–1393. https://doi.org/10.1093/ndt/gfp732

7.      Dell RB, Holleran S, Ramakrishnan R. Sample size determination. ILAR J. 2002;43(4):207-13. doi: 10.1093/ilar.43.4.207.

8.      Nwachukwu D. ‘5 incredible sample size calculators: software every researcher must have’. 2015. [Internet] [Cited Apr 20/2022]. Available from: https://nairaproject.com/blog/5-incredible-softwares-that%20will-determine-sample-size-for-research-projects.html

9.      nMaster 2.0 Sample Size Software. [Internet] [Cited Apr 22/2022]. Available from: https://www.cmc-biostatistics.ac.in/nmaster/.

10.   Kang H. Sample size determination and power analysis using the G*Power software. J Educ Eval Health Prof. 2021;18:17. doi: 10.3352/jeehp.2021.18.17.

11.   Mohanasundari SK, Padmaja A, Kothari S, Rathod KK. Effectiveness of music therapy with conventional intervention on preoperative anxiety among children undergoing surgeries in selected hospitals, Rajasthan: a pilot study. IJPEN. 2020; 6(2): 61-69. DOI: http://dx.doi.org/10.21088/potj.0974.5777.6220.2.

12.   Definition for confidence level. Statista. [Internet] [Cited Apr 20/2022]. Available from: https://www.statista.com/statistics-glossary/definition/328/confidence_level/

13.   Polit DF, Beck CT. Estimation of parameters. 7th edition. Lippincott Williams and Wilkins. p. 495.

14.   Margin of error. [Internet] [Cited Apr 22/2022] Available from: https://www.surveymonkey.com/mp/margin-of-error-calculator/

15.   Levine M, Ensom MH. Post hoc power analysis: an idea whose time has passed? Pharmacotherapy. 2001;21:405–409. doi: 10.1592/phco.21.5.405.34503.

16.   Hoenig JM, Heisey DM. The abuse of power: the pervasive fallacy of power calculations for data analysis. Am Stat. 2001;55:19–24. doi: 10.1198/000313001300339897.

17.   Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd Ed. Mahwah NJ: Lawrence Erlbaum Associates; 1988.

18.   Sullivan L. Power and Sample Size Determination. Boston University School of Public Health. [Internet] [Cited Apr 30/2022]. Available from: https://sphweb.bumc.bu.edu/otlt/mph-modules/bs/bs704_power/bs704_power_print.

 

 

 

Received on 30.04.2022           Modified on 11.05.2022

Accepted on 19.05.2022        ©A&V Publications All rights reserved

Asian J. Nursing Education and Research. 2022; 12(3):317-324.

DOI: 10.52711/2349-2996.2022.00066