Internal consistency
Internal consistency (or internal reliability) is a way of measuring the inter-correlation among a set of measurement items that are intended to measure a single construct.
Internal consistency can be measured using (see the sketch after this list):
- Split-half reliability: Correlation between the total score for the first half of the items and the total score for the second half of the items
- Odd-even reliability: Correlation between the total score for items 1, 3, 5 etc. and the total score for items 2, 4, 6 etc.
- Cronbach's alpha (α): Average of all possible split-half correlations.
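As a rough illustration of the first two approaches, here is a minimal Python sketch of split-half and odd-even reliability. The `scores` array is a hypothetical respondents × items matrix (simulated here from a single latent trait plus noise), not data from the source:

```python
import numpy as np

# Hypothetical data: 100 respondents answering 8 items that all tap one latent trait.
rng = np.random.default_rng(0)
trait = rng.normal(size=(100, 1))
scores = trait + rng.normal(scale=1.0, size=(100, 8))

def split_half_reliability(scores):
    """Correlation between the total of the first half of the items and the total of the second half."""
    k = scores.shape[1]
    first_half = scores[:, :k // 2].sum(axis=1)
    second_half = scores[:, k // 2:].sum(axis=1)
    return np.corrcoef(first_half, second_half)[0, 1]

def odd_even_reliability(scores):
    """Correlation between the total of items 1, 3, 5, ... and the total of items 2, 4, 6, ..."""
    odd_total = scores[:, 0::2].sum(axis=1)   # items 1, 3, 5, ... (zero-based columns 0, 2, 4, ...)
    even_total = scores[:, 1::2].sum(axis=1)  # items 2, 4, 6, ...
    return np.corrcoef(odd_total, even_total)[0, 1]

print(round(split_half_reliability(scores), 2), round(odd_even_reliability(scores), 2))
```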
Cronbach's α can range between 0 and 1:
- Generally, the more items, the higher the internal reliability will be
- General rule of thumb:
- .6 = OK
- .7 = Good
- .8 = Very Good
- .9 = Excellent
- >.95 = too high; items are too inter-related and therefore some are redundant
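To make the calculation concrete, here is a minimal sketch of Cronbach's α using the standard variance-based formula, α = (k / (k − 1)) × (1 − Σ item variances / variance of total score). It assumes the same hypothetical respondents × items `scores` array as above:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # sample variance of the total score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
```

The resulting value can then be read against the rule of thumb above (e.g. an α of .82 would count as very good).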
Scenario
- Question: Cronbach's α for a set of measurement items is .76. If one of the items is removed, the alpha will be .77. Should the item be removed?
- Answer: Maybe. Investigate the item more closely. How does it relate to the underlying factor? What are its correlations with other items in the factor? Then make a decision. Do you think the measure of the underlying factor is better with or without the item? Why or why not?
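One way to inform this decision is an "alpha if item deleted" check, sketched below. It reuses the `cronbach_alpha()` function sketched above (an assumption of this page's sketches, not a standard library call) and recomputes α with each item dropped in turn, alongside each item's correlation with the total of the remaining items:

```python
import numpy as np

def alpha_if_item_deleted(scores):
    """Recompute Cronbach's alpha with each item removed in turn."""
    k = scores.shape[1]
    return np.array([
        cronbach_alpha(np.delete(scores, i, axis=1))  # alpha without item i
        for i in range(k)
    ])

def corrected_item_total_correlations(scores):
    """Correlation of each item with the total of the remaining items."""
    k = scores.shape[1]
    return np.array([
        np.corrcoef(scores[:, i], np.delete(scores, i, axis=1).sum(axis=1))[0, 1]
        for i in range(k)
    ])
```

An item whose removal raises α only slightly, but which has a reasonable corrected item-total correlation and clear conceptual relevance to the factor, may still be worth keeping.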
See also
- Cronbach's alpha (Wikipedia)
- Internal consistency (Wikipedia)
- Reliability
- Psychometric instrument development (Lecture)
- Internal consistency (Data analysis tutorial)