Exploratory factor analysis
This page introduces key points about the use of exploratory factor analysis particularly for the purposes of psychometric instrument development. 
Assumed knowledge[edit]
Purposes of factor analysis[edit]
There are two main purposes or applications of factor analysis:
 1. Data reduction
Reducing data to a smaller set of summary variables, e.g., psychological questionnaires often aim to measure several psychological constructs, with each construct measured using multiple items, which can be combined into a smaller number of factor scores.
 2. Exploring theoretical structure
Theoretical questions about the underlying structure of psychological phenomena can be explored and empirically tested using factor analysis e.g., is intelligence better described as a single, general factor, or as consisting of multiple, independent dimensions?
History[edit]
Factor analysis was initially developed by Charles Spearman in 1904. For more information, see factor analysis history.
Assumptions[edit]
 Normality: Statistical inference is improved if the variables are multivariate normal[1]
 Linear relations between variables Test by visually examining all or at least some of the bivariate scatterplots:
 Is the relationship linear?
 Are there bivariate outliers?
 Factorability is the assumption that there are at least some correlations amongst the variables so that coherent factors can be identified. Basically, there should be some degree of collinearity among the variables but not an extreme degree or singularity among the variables. Factorability can be examined via any of the following:
 Inter-item correlations (correlation matrix): are there at least several sizable correlations e.g., > .5?
 Anti-image correlation matrix diagonals: these should be > ~.5.
 Measures of sampling adequacy (MSAs):
 Kaiser-Meyer-Olkin (KMO) (should be > ~.5) and
 Bartlett's test of sphericity (should be significant)
 Sample size: The sample size should be large enough to yield reliable estimates of correlations among the variables:
 Ideally, there should be a large ratio of N / k (Cases / Items) e.g., > ~20:1
 (e.g., if there are 20 items in the survey, ideally there would be at least 400 cases)
 EFA can still be reasonably done with > ~5:1
 Bare minimum for pilot study purposes: as low as 3:1.
For more information, see the lecture notes.
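The factorability checks above (Bartlett's test of sphericity and the KMO measure) can be sketched directly from the correlation matrix. The following is a minimal numpy illustration under our own function names, not any package's API; in practice a statistics package reports these for you. The Bartlett chi-square is compared against a chi-square distribution with p(p-1)/2 degrees of freedom.

```python
import numpy as np

def bartlett_sphericity(data):
    """Bartlett's test of sphericity: chi-square statistic and degrees of
    freedom. Tests whether the correlation matrix differs from identity."""
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return chi2, df

def kmo(data):
    """Overall Kaiser-Meyer-Olkin measure of sampling adequacy: compares
    sizes of zero-order correlations to anti-image (partial) correlations."""
    R = np.corrcoef(data, rowvar=False)
    inv = np.linalg.inv(R)
    # Anti-image correlations come from the inverse correlation matrix
    d = np.sqrt(np.diag(inv))
    partial = -inv / np.outer(d, d)
    np.fill_diagonal(partial, 0)
    np.fill_diagonal(R, 0)  # keep only off-diagonal correlations
    r2, p2 = (R ** 2).sum(), (partial ** 2).sum()
    return r2 / (r2 + p2)
```

With factorable data (items sharing a common factor), KMO comes out well above .5 and the Bartlett chi-square is large and significant.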
Types (methods of extraction)[edit]
There are two main types of extraction:
 Principal components (PC): Analyses all variance in the items, usually preferred when the goal is to reduce a set of variables down to a smaller number of factors and to create composite scores for these factors for use in subsequent analysis.
 Principal axis factoring (PAF): Analyses shared variance amongst the items. Used more often for theoretical explorations of the underlying factor structure.
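The difference between the two extraction methods comes down to what sits on the diagonal of the correlation matrix: unities (all variance) for PC, communality estimates (shared variance only) for PAF. A toy numpy sketch of this idea (our own simplified illustration, not a production implementation):

```python
import numpy as np

def extract(data, n_factors, method="pc", iters=50):
    """Toy extraction sketch. 'pc' eigendecomposes the correlation matrix
    with 1s on the diagonal (total variance); 'paf' replaces the diagonal
    with communality estimates (shared variance) and iterates."""
    R = np.corrcoef(data, rowvar=False)
    Rw = R.copy()
    if method == "paf":
        # Initial communalities: squared multiple correlations
        np.fill_diagonal(Rw, 1 - 1 / np.diag(np.linalg.inv(R)))
    for _ in range(iters if method == "paf" else 1):
        vals, vecs = np.linalg.eigh(Rw)
        idx = np.argsort(vals)[::-1][:n_factors]
        loadings = vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))
        if method == "paf":
            # Re-estimate communalities from the current loadings
            np.fill_diagonal(Rw, (loadings ** 2).sum(axis=1))
    return loadings
```

Because PC analyses total variance while PAF analyses only shared variance, PC loadings tend to be slightly larger than the corresponding PAF loadings.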
Rotation[edit]
There are two main types of factor rotation:
 Orthogonal (varimax): Factors are independent, i.e. no correlation between factors
 Oblique (oblimin): Factors are related, with some correlation between them (e.g., > .3). The extent of correlation allowable between factors is controlled by delta, a parameter that controls the degree of obliqueness amongst the factors:
 Negative values decrease factor correlations (towards full orthogonality)
 0 is the default
 Positive values (don't go over .8) permit additional factor correlation. More info about delta
If the researcher hypothesises uncorrelated factors, use orthogonal rotation; if the researcher hypothesises correlated factors, use oblique rotation. In practice, researchers will usually try different types of rotation and then decide on the best form based on which rotation produces the "cleanest" model (i.e., with the lowest cross-loadings).
Determine the number of factors[edit]
There is no definitive, simple way to determine the number of factors. This is a subjective decision by the researcher, but the researcher should be guided by several considerations:
 Theory: e.g., How many factors were expected? For new factors, were they expected? Do the extracted factors make theoretical sense?
 Kaiser's criterion: Eigenvalues over 1; but this is arbitrary, so also use judgement about how many factors to extract for the final model.
 Scree plot: Plots the eigenvalues. Look for a notable drop; the rest is 'scree'. Extract the number of factors that form the 'cliff'. Again, use judgement about the meaning of the factors for final decisions.
 Interpretability: Are all factors interpretable? (especially the last one?) In other words, can you reasonably name and describe the items as indicative of an underlying factor?
 Have you tried several different models, with different numbers of factors? Before deciding on the final number of factors make sure to look at solutions for, say, 2, 3, 4, 5, 6 and 7 factors.
 Have you eliminated items which don't seem to belong? (This can change the structure/number of factors.) After you remove items which don't seem to belong, recheck whether you still have a clear factor structure. It may be that a different number of factors (probably one or two fewer) is now more appropriate.
 Are the factor correlations not too high (e.g., not over ~.7)? Otherwise the factors may be too similar (and redundant).
 Check the factor structure across subsamples (e.g., is the factor structure consistent for males and females?)
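Kaiser's criterion and the scree plot both start from the eigenvalues of the correlation matrix, which are easy to inspect directly (a minimal numpy sketch; the function name is ours):

```python
import numpy as np

def kaiser_count(data):
    """Eigenvalues of the correlation matrix (descending), and how many
    exceed 1 (Kaiser's criterion). A starting point, not a final answer."""
    R = np.corrcoef(data, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]
    return eigvals, int((eigvals > 1).sum())
```

Plotting the returned eigenvalues against their rank gives the scree plot; the count of eigenvalues over 1 gives Kaiser's suggestion, which should then be weighed against theory and interpretability as above.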
Name and describe the factors[edit]
 Give each extracted factor a name
 Read through the items with primary loadings on the factor: what underlying factor do they represent?
 If unsure, emphasise the top loading items in naming the factor
 Describe each factor
 Develop a one sentence definition or description of each factor
Criteria for selecting items[edit]
In general, aim for a simple factor structure (unless you have a particular reason why you think a complex structure would be preferable). In a simple factor structure each item has a relatively strong loading on one factor and relatively small loadings on other factors.
Consider the following criteria to help decide whether to include or remove each variable. Remember that these are rules of thumb only; avoid over-reliance on any single indicator. The overarching goal is to include items which contribute to a meaningful measure of the target construct and to remove items that weaken measurement of the target construct. In making this decision, consider:
 Communality (indicates the variance in each item explained by the extracted factors; ideally, above .5)
 Primary (target) factor loading (indicates how strongly each item loads on each factor; should generally be above .5, preferably above .6)
 Item cross-loadings (indicates how strongly each item loads on each other factor; there should be a gap of at least ~.2 between primary and cross-loadings, with cross-loadings above .3 being worrisome)
 Meaningful and useful membership of a factor (read over each item to check whether it makes a meaningful and useful (non-redundant) contribution to an identifiable factor (face validity))
 Reliability (check internal consistency of each factor using Cronbach's alpha and check Alpha if item removed to determine whether removal of any additional items would improve reliability)
 See also: How do I eliminate items? (lecture notes)
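Cronbach's alpha and "alpha if item removed" are straightforward to compute from a cases-by-items score matrix (a minimal numpy sketch; function names are ours):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a cases-by-items score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def alpha_if_deleted(items):
    """Alpha recomputed with each item removed in turn. An item whose
    removal raises alpha is a candidate for elimination."""
    k = items.shape[1]
    return [cronbach_alpha(np.delete(items, j, axis=1)) for j in range(k)]
```

For example, a factor built from four internally consistent items plus one unrelated "noise" item will show a higher alpha-if-deleted value for the noise item than for any of the others, flagging it for removal.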
Data analysis exercises[edit]
Pros & cons[edit]
 Advantages (Wikipedia)
 Disadvantages (Wikipedia)
Glossary[edit]
 Anti-image correlation matrix: Contains the negative partial covariances and correlations. Diagonals are used as a measure of sampling adequacy (MSA).
 Bartlett test of sphericity: Statistical test for the overall significance of all correlations within a correlation matrix. Used as a measure of sampling adequacy (MSA).
 Common factor: A factor on which two or more variables load.
 Common factor analysis: A statistical technique which uses the correlations between observed variables to estimate common factors and the structural relationships linking factors to observed variables.
 Common variance: Variance in a variable shared with common factors. Factor analysis assumes that a variable's variance is composed of three components: common, specific and error.
 Communality: The proportion of a variable's variance explained by a factor structure. Final communality estimates are the sum of squared loadings for a variable in an orthogonal factor matrix.
 Complex variable: A variable which loads on two or more factors.
 Correlation: The product-moment correlation coefficient.
 Correlation matrix: Table showing the intercorrelations among all variables.
 Data reduction: Reducing the number of cases or variables in a data matrix e.g., factor analysis can be used to replace a large collection of variables with a smaller number of factors.
 Eigenvalue: Column sum of squared loadings for a factor. It conceptually represents the amount of variance accounted for by a factor.
 Error variance: Unreliable and inexplicable variation in a variable. Error variance is assumed to be independent of common variance, and a component of the unique variance of a variable.
 Exploratory factor analysis: A factor analysis technique used to explore the underlying structure of a collection of observed variables.
 Factor: Linear combination of the original variables. Factors represent the underlying dimensions (constructs) that summarise or account for the original set of observed variables.
 Factor analysis: A statistical technique used to (1) estimate factors, or (2) reduce the dimensionality of a large number of variables to a fewer number of factors.
 Factor loading: Correlation between a variable and a factor, and the key to understanding the nature of a particular factor. Squared factor loadings indicate what percentage of the variance in an original variable is explained by a factor.
 Factor matrix: Table displaying the factor loadings of all variables on each factor. Factors are presented as columns and the variables are presented as rows.
 Factor rotation: A process of adjusting the factor axes to achieve a simpler and pragmatically more meaningful factor solution; the goal is a simple factor structure.
 Factor score: Composite measure created for each observation (case) on each factor extracted in the factor analysis. Factor weights are used in conjunction with the original variable values to calculate each observation's score. The factor scores are standardised as z-scores.
 Image of a variable: The component of a variable which is predicted from other variables. Antonym: anti-image of a variable.
 Indeterminacy: If it is impossible to estimate population factor structures exactly because an infinite number of factor structures can produce the same correlation matrix, then there are more unknowns than equations in the common factor model, and we say that the factor structure is indeterminate.
 Latent factor: A theoretical underlying factor hypothesised to influence a number of observed variables. Common factor analysis assumes latent variables are linearly related to observed variables.
 Measure of sampling adequacy (MSA): Measures calculated both for the entire correlation matrix and each individual variable evaluating the appropriateness of applying factor analysis.
 Oblique factor rotation: Factor rotation such that the extracted factors are correlated. Rather than arbitrarily constraining the factor rotation to an orthogonal (90 degree angle), the oblique solution identifies the extent to which each of the factors are correlated.
 Orthogonal factor rotation: Factor rotation such that their axes are maintained at 90 degrees. Each factor is independent of, or orthogonal to, all other factors. The correlation between the factors is determined to be zero.
 Parsimony principle: When two or more theories explain the data equally well, select the simplest theory e.g., if a 2factor and a 3factor model explain about the same amount of variance, interpret the 2factor model.
 Principal axis factoring (PAF): A method of factor analysis in which the factors are based on a reduced correlation matrix using a priori communality estimates. That is, communalities are inserted in the diagonal of the correlation matrix, and the extracted factors are based only on the common variance, with specific and error variances excluded.
 Principal component analysis (PC or PCA): (1) The factors are based on the total variance. Unities (1s) are used in the diagonal of the correlation matrix; this procedure computationally implies that all the variance is common or shared. [2] (2) a method of factoring a correlation matrix directly, without estimating communalities. Linear combinations of variables are estimated which explain the maximum amount of variance in the variables. The first component accounts for the most variance in the variables. Then the second component accounts for the most variance in the variables residualised for the first component, and so on. [3]
 Scree plot: A graphical method for determining the number of factors. The eigenvalues are plotted in the sequence of the principal factors. The number of factors is chosen where the plot levels off to a linear decreasing pattern.
 Simple structure: A pattern of factor loading results such that each variable loads highly onto one and only one factor.
 Specific variance: (1) Variance of each variable unique to that variable and not explained or associated with other variables in the factor analysis. [4] (2) The component of unique variance which is reliable but not explained by common factors. [5]
 Unique variance: The proportion of a variable's variance that is not shared with a factor structure. Unique variance is composed of specific and error variance.
 Varimax: The most commonly used factor rotation method; an orthogonal rotation criterion which maximizes the variance of the squared elements in the columns of a factor matrix.
 Glossary sources
 Factor analysis glossary (richmond.edu)
 Factor analysis glossary (siu.edu)
References[edit]
 Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological Methods, 4, 272-299.
 Tabachnick, B. G., & Fidell, L. S. (2001). Principal components and factor analysis. In Using multivariate statistics (4th ed., pp. 582-633). Needham Heights, MA: Allyn & Bacon.
See also[edit]
 Wikiversity
 Lecture notes
 Data analysis tutorial
 Internal consistency
 Composite scores
 Psychometric instrument development
 Survey research and design in psychology
 Exploratory factor analysis (Lecture)
 Psychometrics (Tutorial)
 Wikipedia & Wikibooks
 Factor analysis in psychometrics (Wikipedia)
 Principal component analysis (Wikipedia)
 Principal component analysis (Wikibooks)
External links[edit]
 Darlington, R. B., Factor analysis.
 Exploratory factor analysis (Lecture slides on slideshare.net)
 Exploratory factor analysis (Lecture on ucspace.canberra.edu.au)
 Factor analysis links (del.icio.us)
 Factor analysis resources: Understanding & using factor analysis in psychology & the social sciences (Wilderdom)
 Open and free online course on exploratory data analysis (Carnegie Mellon University)
 Principal components and factor analysis (statsoft.com)
 Factor analysis: Principal components factor analysis: Use of extracted factors in multivariate dependency models (bama.ua.edu)
 Sample factor analysis writeup