# More complex summary statistics

**More complex summary statistics: Effect sizes**

Statistics Networking Day, 6 August, 2015, UC

## Statistics in psychology

| Year | Unit | Topics | Enrolment |
|---|---|---|---|
| 1st | Introduction to Psychological Research | Research design, univariate descriptive statistics, bivariate linear correlations, t-tests | ~200-250 |
| 2nd | Experimental Psychology | Experimental design, ANOVAs, non-parametric tests of differences | ~150 |
| 3rd | Survey Research and Design in Psychology | Survey design, correlations, exploratory factor analysis, multiple linear regression | ~140 |
| 4th | Research Methods and Professional Ethics | Advanced multivariate statistics (ANOVA and MLR) | ~25-30 |

## What's your favourite complex summary statistic?

- What would you choose?
- For me the choice is easy: although I wasn't taught how to use effect sizes in undergraduate psychology, they are the statistics I use most often and find most useful.

## My favourite: Effect sizes

- Social sciences have tended to over-emphasise null hypothesis significance testing and under-emphasise the use of effect sizes:

> "I believe that the almost universal reliance on merely refuting the null hypothesis as the standard method for corroborating substantive theories in the soft areas is a terrible mistake, is basically unsound, poor scientific strategy, and one of the worst things that ever happened in the history of psychology" (Meehl, 1978, p. 817)

- Statistical significance is a function of effect size, sample size, and probability level - but most often, people want to know about the effect size (i.e., "how strong is the relationship?" or "how big is the difference?").
- Effect sizes can be re-expressed as other effect sizes or other common language formats such as percentages.
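The re-expression point above can be sketched in code. This is a minimal illustration, not material from the talk: it uses two standard conversion formulas - r = d / √(d² + 4), which assumes equal group sizes, and the common language effect size Φ(d/√2), which assumes normal distributions.

```python
import math

def d_to_r(d: float) -> float:
    """Re-express Cohen's d as a point-biserial r (assumes equal group sizes)."""
    return d / math.sqrt(d**2 + 4)

def d_to_cles(d: float) -> float:
    """Common language effect size: the probability that a randomly chosen
    member of group 1 scores higher than a randomly chosen member of
    group 2, assuming normal distributions.  Equals Phi(d / sqrt(2)),
    computed here via the error function."""
    return 0.5 * (1 + math.erf(d / 2))

# A "medium" effect of d = 0.5 corresponds to r of about .24, and to a
# roughly 64% chance that a random group-1 member outscores a random
# group-2 member.
r = d_to_r(0.5)       # ~0.243
cles = d_to_cles(0.5) # ~0.638
```

Expressed as a percentage, the common language form is often the easiest to communicate to non-statistical audiences.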

## Types of effect sizes

| Type | Correlations | Means |
|---|---|---|
| Bivariate | r | d |
| Multivariate | R | |
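The two bivariate effect sizes in the table can be computed from raw data as follows. This is a minimal sketch (not from the talk): Pearson's r for the correlation family, and Cohen's d - the standardised mean difference with a pooled standard deviation - for the mean-difference family.

```python
import math

def pearson_r(x: list, y: list) -> float:
    """Pearson product-moment correlation between two paired variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cohens_d(g1: list, g2: list) -> float:
    """Standardised difference between two group means,
    divided by the pooled standard deviation."""
    n1, n2 = len(g1), len(g2)
    m1, m2 = sum(g1) / n1, sum(g2) / n2
    v1 = sum((a - m1) ** 2 for a in g1) / (n1 - 1)
    v2 = sum((b - m2) ** 2 for b in g2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

Multivariate analogues (e.g., multiple R from a regression) extend the same idea to several predictors at once.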

## Graphing effect sizes

## Confidence intervals

- Accompanying effect sizes (ESs) with confidence intervals (CIs) offers the best of both worlds - i.e., it indicates both the size of an observed effect and the uncertainty around that estimate.
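One common way to attach a CI to an effect size can be sketched as follows. This is an illustration, not necessarily the method used in the talk: it uses a standard large-sample normal approximation for the standard error of Cohen's d.

```python
import math

def d_confidence_interval(d: float, n1: int, n2: int, z: float = 1.96):
    """Approximate 95% CI for Cohen's d using the large-sample
    normal approximation to its standard error."""
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# With d = 0.5 and 100 participants per group, the interval spans
# roughly 0.22 to 0.78 - wide enough to remind us how uncertain a
# single-study effect size estimate is.
lo, hi = d_confidence_interval(0.5, 100, 100)
```

Plotting each effect with its CI as an error bar (the "Graphing effect sizes" idea above) communicates both magnitude and precision at a glance.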

## Take-home message

- In applied social science research, people generally want to know about the strength of relationship or the size of the difference - so, report effect sizes. Don't rely solely on inferential statistical testing.
- Graph effect sizes and include confidence intervals.

## References

- Hattie, J., Marsh, H. W., Neill, J. T., & Richards, G. E. (1997). Adventure education and Outward Bound: Out-of-class experiences that make a lasting difference. *Review of Educational Research*, *67*, 43-87.
- Meehl, P. E. (1978). Theoretical risks and tabular asterisks: Sir Karl, Sir Ronald, and the slow progress of soft psychology. *Journal of Consulting and Clinical Psychology*, *46*, 806-834.
- Neill, J. T. (2008). *Enhancing life effectiveness: The impacts of outdoor education programs*. Unpublished doctoral dissertation, Faculty of Education, University of Western Sydney, NSW, Australia.