# Effect size/Data analysis tutorial


## Effect sizes (Cohen's *d*)

### Between groups

Cohen's *d* is the recommended effect size for expressing the difference between two means. The following exercises involve computing the effect size between two independent means, which you can work out using the downloadable calculator. Use Cohensd.xls to work out the following:
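The spreadsheet itself isn't reproduced here, but the underlying computation is simple. As a rough sketch in Python (the function name is ours; equal group sizes are assumed, so the pooled SD is just the root mean of the two variances):

```python
import math

def cohens_d(m1, sd1, m2, sd2):
    """Cohen's d for two independent means, using the pooled SD.
    Assumes equal group sizes, so the pooled variance is the simple
    average of the two variances."""
    sd_pooled = math.sqrt((sd1**2 + sd2**2) / 2)
    return (m2 - m1) / sd_pooled

# First exercise below: M1 = 10, SD1 = 3, M2 = 12.5, SD2 = 4
print(round(cohens_d(10, 3, 12.5, 4), 2))  # → 0.71, a medium-to-large effect
```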

- What is the effect size between two means (and how large is it?), where M1 = 10, SD1 = 3, M2 = 12.5, SD2 = 4?
  - Using a 99% confidence interval, *N* = 20 in each group, is this a significant difference?
  - Using a 95% confidence interval, *N* = 20 in each group, is this a significant difference?
  - Using a 90% confidence interval, *N* = 20 in each group, is this a significant difference?
  - What happens if we increase the *N* in each group to 100?
- What is the effect size between two means (and how large is it?), where M1 = 8, SD1 = 2, M2 = 3.5, SD2 = 2?
  - Using a 99% confidence interval, *N* = 40 in each group, is this a significant difference?
  - Using a 95% confidence interval, *N* = 40 in each group, is this a significant difference?
  - Using a 90% confidence interval, *N* = 40 in each group, is this a significant difference?
  - What happens if we decrease the *N* in each group to 10?
- What is the effect size between two means (and how large is it?), where M1 = .5, SD1 = .1, M2 = .6, SD2 = .07?
  - Using a 99% confidence interval, *N* = 200 in each group, is this a significant difference?
  - Using a 95% confidence interval, *N* = 200 in each group, is this a significant difference?
  - Using a 90% confidence interval, *N* = 200 in each group, is this a significant difference?
- What is the effect size between two means, *N* = 15, where M1 = 6.5, SD1 = 1, M2 = 7.5, SD2 = 1.1?
  - What is the confidence interval? Using only the confidence interval, is this difference significant?
  - How large would *N* need to be to get a just-significant result?
  - Using *N* = 15, how large would M2 need to be to get a just-significant result?
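For checking your answers without the spreadsheet, the confidence intervals can be approximated. This sketch uses the large-sample normal approximation to the standard error of *d* (an assumption on our part; the spreadsheet may use an exact noncentral-*t* method, which will differ slightly, especially for small *N*):

```python
from statistics import NormalDist

def d_confidence_interval(d, n1, n2, level=0.95):
    """Approximate CI for Cohen's d, using the common large-sample
    normal approximation to its standard error."""
    se = ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))) ** 0.5
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)
    return d - z * se, d + z * se

# First exercise: d ≈ 0.71 with N = 20 per group
for level in (0.90, 0.95, 0.99):
    lo, hi = d_confidence_interval(0.71, 20, 20, level)
    # if the interval excludes 0, the difference is significant at that level
    print(f"{level:.0%}: [{lo:.2f}, {hi:.2f}]")
```

With these values, the 90% and 95% intervals exclude zero but the 99% interval does not, which is the pattern the first exercise is probing: the same data can be "significant" at one confidence level and not at a stricter one.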

### Repeated measures

There is controversy about how to compute effect sizes for repeated measures. See Effect Size Measures for Two Dependent Groups:

> "The ES computed using the paired *t*-test value will always be larger than the ES computed using a between-groups *t*-test value, or the original standard deviations of the scores... However, Dunlop, et al. convincingly argue that the original standard deviations (or the between-group *t*-test value) should be used to compute ES for correlated designs. They argue that if the pooled standard deviation is corrected for the amount of correlation between the measures, then the ES estimate will be an overestimate of the actual ES.... In summary, when you have correlated designs you should use the original standard deviations to compute the ES rather than the paired *t*-test value or the within-subjects *F* value."

- If you would like to calculate the repeated measures Cohen's *d* based on the pooled standard deviation corrected for the correlation, use this calculator (note: you need to input the correlation): Cohensdrepeatedmeasures.xls
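To see the quoted point numerically, here is a small sketch (the function and the example numbers are ours) comparing the two standardisers: the original pooled SD versus the SD of the difference scores, which is what a paired *t*-test effectively standardises by:

```python
import math

def d_repeated(m1, sd1, m2, sd2, r):
    """Two effect sizes for a repeated-measures design.

    d_original standardises by the pooled original SDs, as the quoted
    passage recommends; d_paired standardises by the SD of the
    difference scores, which shrinks as the correlation r grows,
    inflating the resulting effect size."""
    diff = m2 - m1
    sd_pooled = math.sqrt((sd1**2 + sd2**2) / 2)
    sd_diff = math.sqrt(sd1**2 + sd2**2 - 2 * r * sd1 * sd2)
    return diff / sd_pooled, diff / sd_diff

# hypothetical numbers with a typical repeated-measures correlation
d_original, d_paired = d_repeated(10, 3, 12.5, 4, r=0.7)
print(round(d_original, 2), round(d_paired, 2))
```

For equal SDs, the paired version exceeds the original-SD version whenever *r* > .5, which is why the correction-based estimate tends to overestimate the effect in correlated designs.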

## Confidence intervals

- Confidence intervals for Cohen's *d* effect sizes can be calculated using the above calculators.
- When *N* increases, the CI decreases - see this Java applet. When you hit Run, it will start sampling and show the means and 95% CIs for each sample. Stop the applet, change the desired *N*, and start again. Compare the obtained distributions for samples with different *N*s.
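If the applet is unavailable, the same demonstration can be run in a few lines of Python (a stand-in, not the applet itself; it simulates normal samples and uses the normal approximation for the interval):

```python
import random
from statistics import NormalDist, stdev

def ci_half_width(n, level=0.95, sigma=1.0):
    """Half-width of a CI for the mean of one simulated sample of
    size n, drawn from a normal population with SD sigma."""
    sample = [random.gauss(0, sigma) for _ in range(n)]
    z = NormalDist().inv_cdf(1 - (1 - level) / 2)
    return z * stdev(sample) / n ** 0.5

random.seed(1)
for n in (10, 100, 1000):
    # widths shrink roughly in proportion to 1 / sqrt(n)
    print(n, round(ci_half_width(n), 3))
```

Running this for several sample sizes makes the applet's lesson concrete: each tenfold increase in *N* cuts the interval width by roughly a factor of √10.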