Exploratory factor analysis

This page introduces key points about the use of exploratory factor analysis particularly for the purposes of psychometric instrument development.

Assumed knowledge[edit]

Purposes of factor analysis[edit]

There are two main purposes or applications of factor analysis:

1. Data reduction

Reducing data to a smaller set of summary variables. For example, psychological questionnaires often aim to measure several psychological constructs, with each construct measured using multiple items; these items can then be combined into a smaller number of factor scores.

2. Exploring theoretical structure

Theoretical questions about the underlying structure of psychological phenomena can be explored and empirically tested using factor analysis. For example, is intelligence better described as a single, general factor, or as consisting of multiple, independent dimensions?

History[edit]

Factor analysis was initially developed by Charles Spearman in 1904. For more information, see factor analysis history.

Assumptions[edit]

  1. Normality: Statistical inference is improved if the variables are multivariate normal.[1]
  2. Linear relations between variables: Test by visually examining all (or at least some) of the bivariate scatterplots:
    1. Is the relationship linear?
    2. Are there bivariate outliers?
  3. Factorability is the assumption that there are at least some correlations amongst the variables, so that coherent factors can be identified. Basically, there should be some degree of collinearity among the variables, but not an extreme degree (i.e., not singularity). Factorability can be examined via any of the following (see the code sketch after this list):
    1. Inter-item correlations (correlation matrix) - are there at least several sizable correlations, e.g., > .5?
    2. Anti-image correlation matrix diagonals - they should be > ~.5.
    3. Measures of sampling adequacy (MSAs):
      • Kaiser-Meyer-Olkin (KMO) (should be > ~.5) and
      • Bartlett's test of sphericity (should be significant)
  4. Sample size: The sample size should be large enough to yield reliable estimates of correlations among the variables:
    1. Ideally, there should be a large ratio of N / k (cases / items), e.g., > ~20:1
      1. e.g., if there are 20 items in the survey, ideally there would be at least 400 cases
    2. EFA can still be reasonably done with ratios > ~5:1
    3. Bare minimum for pilot study purposes: as low as 3:1.
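
The factorability and sample size checks above can be scripted. The following sketch is illustrative only: it assumes the third-party Python package factor_analyzer and a pandas DataFrame named items (one column per questionnaire item); both names are assumptions, not part of this resource.

  import pandas as pd
  from factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

  def check_factorability(items: pd.DataFrame) -> None:
      # Inter-item correlations: look for at least several sizable values.
      print(items.corr().round(2))

      # Bartlett's test of sphericity: should be significant (small p).
      chi2, p = calculate_bartlett_sphericity(items)
      print(f"Bartlett: chi2 = {chi2:.1f}, p = {p:.4f}")

      # Kaiser-Meyer-Olkin measure of sampling adequacy: should be > ~.5.
      kmo_per_item, kmo_overall = calculate_kmo(items)
      print(f"KMO (overall) = {kmo_overall:.2f}")

      # Sample size: cases-to-items ratio (ideally > ~20:1; > ~5:1 is workable).
      n_cases, n_items = items.shape
      print(f"N/k = {n_cases / n_items:.1f}:1")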

For more information, see the lecture notes.

Types (methods of extraction)[edit]

There are two main types of extraction (see the sketch after this list):

  1. Principal components (PC): Analyses all variance in the items, usually preferred when the goal is to reduce a set of variables down to a smaller number of factors and to create composite scores for these factors for use in subsequent analysis.
  2. Principal axis factoring (PAF): Analyses shared variance amongst the items. Used more often for theoretical explorations of the underlying factor structure.
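
As a sketch only, both extraction methods can be run in Python. Principal components is shown via scikit-learn's PCA; for principal axis factoring, the 'principal' method of the factor_analyzer package is used as a stand-in (an assumption about that package's options). The items DataFrame and the choice of three factors are illustrative assumptions.

  from sklearn.decomposition import PCA
  from factor_analyzer import FactorAnalyzer

  # items: pandas DataFrame of questionnaire responses (assumed, as above).
  # Standardise first, since factor analysis works with correlations.
  z = (items - items.mean()) / items.std()

  # 1. Principal components: analyses all (total) variance.
  pc = PCA(n_components=3)
  pc.fit(z)
  print(pc.explained_variance_ratio_.round(2))

  # 2. Principal axis factoring: analyses shared (common) variance only.
  paf = FactorAnalyzer(n_factors=3, method="principal", rotation=None)
  paf.fit(items)
  print(paf.loadings_.round(2))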

Rotation[edit]

There are two main types of factor rotation:

  1. Orthogonal (varimax): Factors are independent, i.e. no correlation between factors
  2. Oblique (oblimin): Factors are related, with some correlations, e.g., > .3. The extent of correlation allowable between factors is controlled by delta, a parameter which controls the extent of obliqueness amongst the factors:
    • Negative values decrease factor correlations (towards full orthogonality)
    • 0 is the default
    • Positive values (don't go over .8) permit additional factor correlation. More info about delta

If the researcher hypothesises uncorrelated factors, use orthogonal rotation; if the researcher hypothesises correlated factors, use oblique rotation. In practice, researchers will usually try different types of rotation and then settle on the rotation which produces the "cleanest" model (i.e., the one with the lowest cross-loadings).
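
A minimal sketch of trying both rotations, again assuming the factor_analyzer package and the items DataFrame from the earlier sketches:

  from factor_analyzer import FactorAnalyzer

  # Orthogonal rotation: factors forced to be uncorrelated.
  varimax = FactorAnalyzer(n_factors=3, rotation="varimax")
  varimax.fit(items)

  # Oblique rotation: factors allowed to correlate.
  oblimin = FactorAnalyzer(n_factors=3, rotation="oblimin")
  oblimin.fit(items)

  # Compare the loading matrices and keep the "cleaner" solution
  # (the one with the lower cross-loadings).
  print(varimax.loadings_.round(2))
  print(oblimin.loadings_.round(2))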

Determine the number of factors[edit]

There is no definitive, simple way to determine the number of factors. This is a subjective decision by the researcher, but the researcher should be guided by several considerations:

  1. Theory: e.g., How many factors were expected? For new factors, were they expected? Do the extracted factors make theoretical sense?
  2. Kaiser's criterion: eigenvalues over 1; but this cut-off is arbitrary, so also use judgement about how many factors to extract for the final model.
  3. Scree plot: Plots the eigenvalues. Look for a notable drop; the rest is 'scree'. Extract the number of factors that form the 'cliff' (see the sketch after this list). Again, use judgement about the meaning of the factors for final decisions.
  4. Interpretability: Are all factors interpretable? (especially the last one?) In other words, can you reasonably name and describe the items as indicative of an underlying factor?
  5. Have you tried several different models, with different numbers of factors? Before deciding on the final number of factors make sure to look at solutions for, say, 2, 3, 4, 5, 6 and 7 factors.
  6. Have you eliminated items which don't seem to belong? (this can change the structure/number of factors)? After you remove items which don't seem to belong, then re-check whether you still have a clear factor structure. It may be that a different number of factors (probably one or two fewer) is now more appropriate.
  7. Are the factor correlations not too high (e.g., not over ~.7)? Otherwise the factors may be too similar (and therefore redundant).
  8. Check the factor structure across sub-samples (e.g., is the factor structure consistent for males and females?)
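
A small sketch of points 2 and 3 (Kaiser's criterion and the scree plot) using numpy and matplotlib; items is again the assumed DataFrame of responses:

  import numpy as np
  import matplotlib.pyplot as plt

  corr = np.corrcoef(items, rowvar=False)
  eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # largest first

  # Kaiser's criterion: eigenvalues over 1 (a starting point, not a rule).
  print("Kaiser suggests", int((eigenvalues > 1).sum()), "factors")

  # Scree plot: look for the drop from 'cliff' to 'scree'.
  plt.plot(range(1, len(eigenvalues) + 1), eigenvalues, marker="o")
  plt.axhline(1, linestyle="--")
  plt.xlabel("Factor number")
  plt.ylabel("Eigenvalue")
  plt.title("Scree plot")
  plt.show()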

Name and describe the factors[edit]

  1. Give each extracted factor a name
    1. Read through the items with primary loadings on the factor - what underlying factor do they represent?
    2. If unsure, emphasise the top loading items in naming the factor
  2. Describe each factor
    1. Develop a one sentence definition or description of each factor

Criteria for selecting items[edit]

In general, aim for a simple factor structure (unless you have a particular reason why you think a complex structure would be preferable). In a simple factor structure each item has a relatively strong loading on one factor and relatively small loadings on other factors.

Consider the following criteria to help decide whether to include or remove each variable. Remember that these are rules of thumb only - avoid over-reliance on any single indicator. The overarching goal is to include items which contribute to a meaningful measure of the target construct and to remove items that weaken measurement of the target construct. In making this decision, consider:

  1. Communality (indicates the variance in each item explained by the extracted factors; ideally, above .5)
  2. Primary (target) factor loading (indicates how strongly each item loads on each factor; should generally be above .5, preferably above .6)
  3. Item cross-loadings (indicate how strongly each item loads on each other factor; there should be a gap of at least ~.2 between primary and cross-loadings, with cross-loadings above .3 being worrisome)
  4. Meaningful and useful membership to a factor (read over each item to check whether it makes a meaningful and useful (non-redundant) contribution to an identifiable factor - i.e., face validity)
  5. Reliability (check the internal consistency of each factor using Cronbach's alpha, and check "Alpha if item removed" to determine whether removing any further items would improve reliability; see the sketch after this list)
  6. See also: How do I eliminate items? (lecture notes)
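
For the reliability criterion, Cronbach's alpha and "alpha if item removed" are straightforward to compute directly. This numpy sketch assumes a cases x items array containing just the items of a single factor:

  import numpy as np

  def cronbach_alpha(x: np.ndarray) -> float:
      # x: cases x items array for the items of one factor.
      x = np.asarray(x, dtype=float)
      k = x.shape[1]
      item_variances = x.var(axis=0, ddof=1).sum()
      total_variance = x.sum(axis=1).var(ddof=1)
      return (k / (k - 1)) * (1 - item_variances / total_variance)

  def alpha_if_item_removed(x: np.ndarray) -> list[float]:
      # Alpha recomputed with each item dropped in turn; a value higher
      # than the full-scale alpha flags an item worth reconsidering.
      return [cronbach_alpha(np.delete(x, i, axis=1)) for i in range(x.shape[1])]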

Data analysis exercises[edit]

Pros & cons[edit]

Glossary[edit]

  1. Anti-image correlation matrix: Contains the negative partial covariances and correlations. Diagonals are used as a measure of sampling adequacy (MSA).
  2. Bartlett test of sphericity: Statistical test for the overall significance of all correlations within a correlation matrix. Used as a measure of sampling adequacy (MSA).
  3. Common factor: A factor on which two or more variables load.
  4. Common factor analysis: A statistical technique which uses the correlations between observed variables to estimate common factors and the structural relationships linking factors to observed variables.
  5. Common variance: Variance in a variable shared with common factors. Factor analysis assumes that a variable's variance is composed of three components: common, specific and error.
  6. Communality: The proportion of a variable's variance explained by a factor structure. Final communality estimates are the sum of squared loadings for a variable in an orthogonal factor matrix.
  7. Complex variable: A variable which loads on two or more factors.
  8. Correlation: The product-moment correlation coefficient.
  9. Correlation matrix: Table showing the inter-correlations among all variables.
  10. Data reduction: Reducing the number of cases or variables in a data matrix e.g., factor analysis can be used to replace a large collection of variables with a smaller number of factors.
  11. Eigenvalue: Column sum of squared loadings for a factor. It conceptually represents the amount of variance accounted for by a factor (see the sketch after this glossary).
  12. Error variance: Unreliable and inexplicable variation in a variable. Error variance is assumed to be independent of common variance, and a component of the unique variance of a variable.
  13. Exploratory factor analysis: A factor analysis technique used to explore the underlying structure of a collection of observed variables.
  14. Factor: Linear combination of the original variables. Factors represent the underlying dimensions (constructs) that summarise or account for the original set of observed variables.
  15. Factor analysis: A statistical technique used to (1) estimate factors, or (2) reduce the dimensionality of a large number of variables to a fewer number of factors.
  16. Factor loading: Correlation between a variable and a factor, and the key to understanding the nature of a particular factor. Squared factor loadings indicate what percentage of the variance in an original variable is explained by a factor.
  17. Factor matrix: Table displaying the factor loadings of all variables on each factor. Factors are presented as columns and the variables are presented as rows.
  18. Factor rotation: A process of adjusting the factor axes to achieve a simpler and pragmatically more meaningful factor solution - the goal is a simple factor structure.
  19. Factor score: Composite measure created for each observation (case) on each factor extracted in the factor analysis. Factor weights are used in conjunction with the original variable values to calculate each observation's score. Factor scores are standardised as z-scores.
  20. Image of a variable: The component of a variable which is predicted from other variables. Antonym: anti-image of a variable.
  21. Indeterminacy: Because an infinite number of factor structures can produce the same correlation matrix, population factor structures cannot be estimated exactly; there are more unknowns than equations in the common factor model, and the factor structure is said to be indeterminate.
  22. Latent factor: A theoretical underlying factor hypothesised to influence a number of observed variables. Common factor analysis assumes latent variables are linearly related to observed variables.
  23. Measure of sampling adequacy (MSA): Measures, calculated both for the entire correlation matrix and for each individual variable, which evaluate the appropriateness of applying factor analysis.
  24. Oblique factor rotation: Factor rotation such that the extracted factors are correlated. Rather than arbitrarily constraining the factor rotation to an orthogonal (90 degree angle), the oblique solution identifies the extent to which each of the factors are correlated.
  25. Orthogonal factor rotation: Factor rotation such that their axes are maintained at 90 degrees. Each factor is independent of, or orthogonal to, all other factors. The correlation between the factors is determined to be zero.
  26. Parsimony principle: When two or more theories explain the data equally well, select the simplest theory e.g., if a 2-factor and a 3-factor model explain about the same amount of variance, interpret the 2-factor model.
  27. Principal axis factoring (PAF): A method of factor analysis in which the factors are based on a reduced correlation matrix using a priori communality estimates. That is, communalities are inserted in the diagonal of the correlation matrix, and the extracted factors are based only on the common variance, with specific and error variances excluded.
  28. Principal component analysis (PC or PCA): (1) The factors are based on the total variance. Unities (1s) are used in the diagonal of the correlation matrix; this procedure computationally implies that all the variance is common or shared. [2] (2) a method of factoring a correlation matrix directly, without estimating communalities. Linear combinations of variables are estimated which explain the maximum amount of variance in the variables. The first component accounts for the most variance in the variables. Then the second component accounts for the most variance in the variables residualised for the first component, and so on. [3]
  29. Scree plot: A graphical method for determining the number of factors. The eigenvalues are plotted in the sequence of the principal factors. The number of factors is chosen where the plot levels off to a linear decreasing pattern.
  30. Simple structure: A pattern of factor loading results such that each variable loads highly onto one and only one factor.
  31. Specific variance: (1) Variance of each variable unique to that variable and not explained or associated with other variables in the factor analysis. [4] (2) The component of unique variance which is reliable but not explained by common factors. [5]
  32. Unique variance: The proportion of a variable's variance that is not shared with a factor structure. Unique variance is composed of specific and error variance.
  33. Varimax: The most commonly used factor rotation method; an orthogonal rotation criterion which maximizes the variance of the squared elements in the columns of a factor matrix.
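
Several of the entries above (factor loading, communality, eigenvalue) are linked by simple arithmetic on the loading matrix. The sketch below uses a made-up orthogonal loading matrix purely to make that arithmetic explicit:

  import numpy as np

  # Illustrative (made-up) loadings: 4 items x 2 factors.
  L = np.array([
      [0.8, 0.0],
      [0.6, 0.0],
      [0.0, 0.7],
      [0.0, 0.5],
  ])

  # Communality: row sum of squared loadings
  # (variance of each item explained by the factor structure).
  print((L ** 2).sum(axis=1))  # 0.64, 0.36, 0.49, 0.25

  # Eigenvalue: column sum of squared loadings
  # (variance accounted for by each factor).
  print((L ** 2).sum(axis=0))  # 1.00, 0.74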

References[edit]

  1. Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272-299.
  2. Tabachnick, B. G., & Fidell, L. S. (2001). Principal components and factor analysis. In Using multivariate statistics (4th ed., pp. 582-633). Needham Heights, MA: Allyn & Bacon.

See also[edit]

Wikiversity
Wikipedia & Wikibooks

External links[edit]