Multiple linear regression/Assumptions

View the accompanying screencast: [1]

Level of measurement

  1. IVs: Two or more continuous (interval or ratio) or dichotomous variables (it may be necessary to recode multichotomous categorical or ordinal IVs, and non-normal interval or ratio IVs, into dichotomous variables or a series of dummy variables; see the sketch after this list)
  2. DV: One continuous (interval or ratio) variable
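As an illustration of the recoding mentioned in point 1, here is a minimal sketch using pandas; the data frame and column names (marital_status, income) are hypothetical.

```python
import pandas as pd

# Hypothetical data: marital_status is a multichotomous categorical IV
df = pd.DataFrame({
    "marital_status": ["single", "married", "divorced", "married", "single"],
    "income": [42, 55, 48, 61, 39],
})

# Recode into k-1 dummy (0/1) variables; drop_first=True avoids the
# dummy-variable trap (perfect collinearity with the regression intercept)
dummies = pd.get_dummies(df["marital_status"], prefix="marital", drop_first=True)
df = pd.concat([df.drop(columns="marital_status"), dummies], axis=1)
print(df)
```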

Sample size

  1. Some rules of thumb:
    1. Enough data is needed to provide reliable estimates of the correlations. Use at least 50 cases, and at least 10 to 20 times as many cases as there are IVs; as the number of IVs increases, more inferential tests are conducted (if each predictor is tested), so more data is needed. Otherwise the estimates of the regression line are likely to be unstable and unlikely to replicate if the study is repeated.
    2. Green (1991) and Tabachnick and Fidell (2007) suggest:
      1. 50 + 8k for testing an overall regression model, and
      2. 104 + k when testing individual predictors (where k is the number of IVs)
      3. These sample size suggestions are based on detecting a medium effect size (β >= .20), with critical α <= .05, with power of 80%.
        To be more accurate, study-specific power and sample size calculations should be conducted (e.g., use the A-priori Sample Size Calculator for multiple regression; note that this calculator uses f² for the anticipated effect size; see the Formulas link for how to convert R² to f²). A worked calculation of the rules of thumb appears after this list.
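The two rules of thumb above are simple enough to compute directly; a minimal sketch (the function name is ours):

```python
def mlr_sample_size(k):
    """Green's (1991) rules of thumb for k predictors (IVs)."""
    overall = 50 + 8 * k     # testing the overall regression model
    individual = 104 + k     # testing individual predictors
    return overall, individual

for k in (2, 5, 10):
    overall, individual = mlr_sample_size(k)
    print(f"k={k}: overall model N >= {overall}, individual predictors N >= {individual}")
```

If both the overall model and the individual predictors are to be tested, the larger of the two values applies.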

Normality

  1. Check the univariate descriptive statistics (M, SD, skewness and kurtosis)
  2. Check the histograms with a normal curve superimposed (see the sketch after this list)
  3. Be wary of (and preferably avoid) inferential tests of normality (e.g., the Shapiro–Wilk test); such tests are notoriously oversensitive for the purposes of regression, flagging trivial departures from normality in large samples.
  4. Estimates of correlations will be more reliable and stable when the variables are normally distributed, but regression is reasonably robust to minor to moderate deviations from normality when moderate to large sample sizes are involved. More important is the examination of scatterplots for bivariate outliers (non-normal univariate data may make bivariate and multivariate outliers more likely).
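A minimal sketch of checks 1 and 2 in Python; the file name (data.csv) and variable names (dv, iv1, iv2) are hypothetical. Note that pandas reports excess kurtosis (normal = 0).

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import norm

df = pd.read_csv("data.csv")        # hypothetical file name
for var in ["dv", "iv1", "iv2"]:    # hypothetical variable names
    s = df[var].dropna()
    print(f"{var}: M={s.mean():.2f} SD={s.std():.2f} "
          f"skew={s.skew():.2f} kurtosis={s.kurt():.2f}")  # excess kurtosis
    s.plot(kind="hist", density=True, alpha=0.6, title=var)
    x = np.linspace(s.min(), s.max(), 200)
    plt.plot(x, norm.pdf(x, s.mean(), s.std()))  # normal curve superimposed
    plt.show()
```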

Linearity

  1. Are the bivariate relationships linear?
  2. Check scatterplots and correlations between the DV (Y) and each of the IVs (Xs); see the sketch after this list
  3. Check for influence of bivariate outliers
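A minimal sketch of these checks, continuing the hypothetical file and variable names used above:

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("data.csv")      # hypothetical file name
cols = ["dv", "iv1", "iv2"]       # hypothetical DV and IVs

# Bivariate correlations between the DV and each IV (and among the IVs)
print(df[cols].corr().round(2))

# Scatterplot matrix: look for straight-line (not curved) relationships
# and for isolated points far from the main cloud (bivariate outliers)
pd.plotting.scatter_matrix(df[cols], figsize=(8, 8))
plt.show()
```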

Homoscedasticity

  1. Are the bivariate distributions reasonably evenly spread about the line of best fit?
  2. Check scatterplots between Y and each of the Xs, and/or check the scatterplot of the standardised residuals (ZRESID) against the standardised predicted values (ZPRED); see the sketch after this list
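A minimal sketch of the ZRESID-versus-ZPRED plot using statsmodels OLS in place of SPSS's Linear Regression; file and variable names are hypothetical, as above.

```python
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm

df = pd.read_csv("data.csv")                 # hypothetical file name
X = sm.add_constant(df[["iv1", "iv2"]])      # hypothetical predictors
model = sm.OLS(df["dv"], X).fit()

# Standardise predicted values and residuals (ZPRED, ZRESID)
zpred = (model.fittedvalues - model.fittedvalues.mean()) / model.fittedvalues.std()
zresid = model.resid / model.resid.std()     # OLS residuals already have mean ~0

# Homoscedastic data form an even band around zero, with no funnel shape
plt.scatter(zpred, zresid)
plt.axhline(0, linestyle="--")
plt.xlabel("ZPRED"); plt.ylabel("ZRESID")
plt.show()
```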

Multicollinearity

  1. Screencast: [2]
  2. Is there multicollinearity between the IVs? Predictors should not be overly correlated with one another. Ways to check:
    1. Examine bivariate correlations and scatterplots between each of the IVs (i.e., are any of the predictors overly correlated, above approximately .7?).
    2. Check the collinearity statistics in the coefficients table (a sketch of computing these statistics follows this list):
      1. Various recommendations for acceptable levels of VIF and Tolerance have been published.
      2. Variance Inflation Factor (VIF) should be low (< 3 to 10) or
      3. Tolerance should be high (> .1 to .3)
      4. Note that VIF and Tolerance have a reciprocal relationship (i.e., TOL=1/VIF), so only one of the indicators needs to be used.
  3. For more information, see [3]
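A minimal sketch of computing VIF and Tolerance with statsmodels; file and predictor names are hypothetical, as above.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("data.csv")                    # hypothetical file name
X = sm.add_constant(df[["iv1", "iv2", "iv3"]])  # hypothetical predictors

# VIF per predictor (skipping the constant); Tolerance is the reciprocal
for i, name in enumerate(X.columns):
    if name == "const":
        continue
    vif = variance_inflation_factor(X.values, i)
    print(f"{name}: VIF = {vif:.2f}, Tolerance = {1 / vif:.3f}")
```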

Multivariate outliers

  1. Multivariate outliers (MVOs)[4][5][6]
  2. Check whether there are influential MVOs using Mahalanobis' Distance (MD) and/or Cook’s D (CD); a sketch of computing MD appears at the end of this section.
  3. SPSS: Linear Regression - Save - Mahalanobis (can also include Cook's D)
    1. After execution, new variables called mah_1 (and coo_1) will be added to the data file.
    2. In the output, check the Residuals Statistics table for the maximum MD and CD.
    3. The maximum MD should not exceed the critical chi-square value with degrees of freedom (df) equal to the number of predictors, at critical alpha = .001. CD should not be greater than 1.
  4. If outliers are detected:
    1. Go to the data file, sort the data in descending order by mah_1, identify the cases with mah_1 distances above the critical value, and consider why these cases have been flagged (these cases will each have an unusual combination of responses for the variables in the analysis, so check their responses).
    2. Remove these cases and re-run the MLR.
      1. If the results are very similar (e.g., similar R² and coefficients for each of the predictors), then it is best to use the original results (i.e., including the multivariate outliers).
      2. If the results are different when the MVOs are not included, then these cases probably have had undue influence and it is best to report the results without these cases.
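Outside SPSS, the same screening can be sketched in Python; file and predictor names are hypothetical, as above. Note that the Mahalanobis value computed here is the squared distance, which is what is compared against the chi-square critical value.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2

df = pd.read_csv("data.csv")          # hypothetical file name
X = df[["iv1", "iv2", "iv3"]].values  # hypothetical predictors
k = X.shape[1]                        # number of predictors

# Squared Mahalanobis distance of each case from the predictor centroid
diff = X - X.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
md2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)

# Critical chi-square with df = number of predictors, alpha = .001
crit = chi2.ppf(1 - 0.001, df=k)
print(f"max MD^2 = {md2.max():.2f}, critical value = {crit:.2f}")
print("flagged cases:", np.where(md2 > crit)[0])
```

Cook's D is also available from a fitted statsmodels OLS model via model.get_influence().cooks_distance.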

Normality of residuals

  1. Residuals are more likely to be normally distributed if each of the variables in the analysis is normally distributed
  2. Check histograms of all variables in an analysis, and check the distribution of the residuals themselves (see the sketch after this list)
  3. Normally distributed variables will enhance the MLR solution
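A minimal sketch of checking the residuals directly, continuing the hypothetical names used above:

```python
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm

df = pd.read_csv("data.csv")                 # hypothetical file name
X = sm.add_constant(df[["iv1", "iv2"]])      # hypothetical predictors
model = sm.OLS(df["dv"], X).fit()

# Histogram of standardised residuals: should be roughly bell-shaped
plt.hist(model.resid / model.resid.std(), bins=20, density=True)
plt.title("Standardised residuals")
plt.show()

# Normal Q-Q plot: points should fall close to the reference line
sm.qqplot(model.resid, line="s")
plt.show()
```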

See also

  1. Four assumptions of multiple regression that researchers should always test (Osborne & Waters, 2002)

References

  1. Allen & Bennett 13.3.2.1 Assumptions (pp. 178-179)
  2. Francis 5.1.4 Practical Issues and Assumptions (pp. 126-128)
  3. Green, S. B. (1991). How many subjects does it take to do a regression analysis? Multivariate Behavioral Research, 26, 499-510.
  4. Knofczynski, G. T., & Mundfrom, D. (2008). Sample sizes when using multiple linear regression for prediction. Educational and Psychological Measurement, 68, 431-442.
  5. Wilson Van Voorhis, C. R., & Morgan, B. L. (2007). Understanding power and rules of thumb for determining sample sizes. Tutorials in Quantitative Methods for Psychology, 3(2), 43-50.