# Pearson's chi-square test

A Pearson's chi-square test can be used as an inferential test of the independence of two nominal variables.

The value of the test statistic is

χ^2 = ∑ (O_i − E_i)^2 / E_i

where

- χ^2 = the test statistic, which approaches a χ^2 distribution;
- O_i = the observed frequencies;
- E_i = the expected frequencies (asserted by the null hypothesis).
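As a minimal sketch of the formula above (the observed counts here are made up for illustration), the statistic can be computed by hand and with `scipy.stats.chisquare`:

```python
# Minimal sketch: Pearson's chi-square statistic by hand and via scipy.
# The observed counts are hypothetical; the expected counts come from a
# uniform null hypothesis over four categories.
from scipy.stats import chisquare

observed = [18, 22, 27, 13]          # hypothetical cell counts
expected = [20, 20, 20, 20]          # H0: all cells equally likely

# Hand calculation: sum of (O - E)^2 / E over all cells
chi2_by_hand = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

stat, p = chisquare(observed, f_exp=expected)
print(chi2_by_hand, stat, p)
```

Both routes give the same statistic; `chisquare` additionally returns the p value from the χ^2 distribution with (cells − 1) degrees of freedom.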

## Pearson’s Chi Square (χ^2)

In-class notes: CGU PSYCH 308D, 'Categorical Data Analysis' (Dale Berger), taken on 2014-04-01 by Josh Penman.

## Parametric Chi Square

#### χ^2 Rules of Thumb

- Each expected frequency should be at least 10, if you want to be conservative about it.

If you take a true normal distribution, you have Z scores. If you randomly took a sample Z score from the distribution and squared it, what is the smallest value you could get?

A: Zero (0).

If you plot the distribution of Z^2, what shape would it have? Would it be a nice normal distribution? No: it would be a rapidly descending curve, piled up near zero.

## Some formulas that need explaining!

Z^2 = χ^2 (df = 1)

Z_1^2 + Z_2^2 = χ^2 (df = 2), for independent Z_1 and Z_2

F(1, ∞) = Z^2 (an F with 1 and ∞ degrees of freedom is a squared Z)

E[((X_i − μ)/σ_X)^2] = 1
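These relationships can be checked numerically; a quick sketch with scipy (the simulation size is arbitrary):

```python
# Quick numerical check of two relationships from the notes:
#   (1) a squared standard normal is chi-square with df = 1, so their
#       critical values must match;
#   (2) the expected value of a squared standard normal is 1.
import numpy as np
from scipy.stats import norm, chi2

# (1) The two-tailed Z cutoff at alpha = .05 is about +/-1.96; squaring
#     it gives the chi-square(1) cutoff at alpha = .05 (about 3.84).
z_cut = norm.ppf(0.975)
chi2_cut = chi2.ppf(0.95, df=1)
print(z_cut ** 2, chi2_cut)          # both about 3.84

# (2) Simulate: the mean of squared Z scores approaches 1.
rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)
print((z ** 2).mean())               # close to 1
```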

The expected value of Chi Squared is equal to the degrees of freedom. So if someone says, “oh, I found a Chi Square of 15 . . . ,” you need to know the degrees of freedom before you can tell whether that is large.

Some statisticians use the notation “ν” (pronounced “nu,” as in “new”) for the degrees of freedom - and there are other beautiful relationships here:

χ^2_ν = ∑_{i=1}^{ν} Z_i^2

- Mean = ν
- Kurtosis (excess) = 12/ν
- Mode =

- If ν ≤ 2 then Mode = 0
- If ν > 2 then Mode = ν-2
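The distribution facts above can be verified against scipy (a sketch; the grid search for the mode is just a numerical convenience):

```python
# Checking the chi-square distribution facts from the notes with scipy:
# mean = nu, excess kurtosis = 12/nu, and (for nu > 2) mode = nu - 2.
import numpy as np
from scipy.stats import chi2

nu = 5
print(chi2.mean(df=nu))                       # mean equals nu
print(chi2.stats(df=nu, moments='k'))         # excess kurtosis = 12/nu = 2.4

# The mode is where the pdf peaks: evaluate on a grid and locate the max.
x = np.linspace(0.01, 20, 20000)
mode_numeric = x[np.argmax(chi2.pdf(x, df=nu))]
print(mode_numeric)                           # close to nu - 2 = 3
```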

### Example

If you wanted to test a die to see whether it’s a fair die, what would you do?

Class Answer: Roll it over and over and see what kind of distribution you get.

- If we roll this die 60 times, how many times would you expect to get a roll of 1? A: 10 (each face has probability 1/6, and (1/6)·60 = 10).
- Prior to the Pearson’s Chi Squared era, we’d just eyeball it and say “this doesn’t look too bad . . .”
- If you randomly rolled a die 60 times and got exactly 10 of each . . . that would be extremely improbable!
- Let’s start by taking the observed frequency minus the expected frequency (see the Google example sheet).

- There are 5 degrees of freedom here. (J: How do you calculate degrees of freedom?)
- So here, you have Chi Square with 5 degrees of freedom; the critical value of χ^2_5 at the .05 α level is 11.07.
- Our computed value is 9.6: have we proven that the die is fair? A: No - we have only failed to reject the null hypothesis.
- In general, if we collect more information, we’ll get a better answer. Here, we don’t have statistical significance at the α = .05 level.
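The notes report a χ^2 of 9.6 on 5 df but not the individual roll counts, so the counts in this sketch are hypothetical, chosen to sum to 60 and reproduce that statistic:

```python
# Goodness-of-fit test for a fair die. The roll counts below are
# hypothetical (the notes only give the resulting statistic, 9.6).
from scipy.stats import chisquare

observed = [16, 4, 14, 8, 8, 10]      # hypothetical counts for faces 1-6
stat, p = chisquare(observed)         # expected defaults to uniform (10 each)
print(stat, p)                        # chi-square = 9.6; p > .05, so not
                                      # significant at the .05 level
```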

Now, think about gambling. The 1 and the 6 are on opposite faces of a die: if you shave a little bit off the 1 side and a little bit off the 6 side, those two faces become the largest, and the die will land showing a 1 or a 6 more often. So we want to test specifically whether getting a 1 or a 6 is more likely than chance. If this is a theory driven not by the data, but by what we know about our friend and the die, we might run a separate, planned test . . .

- What is the observed frequency of getting a 1 or a 6? A: 31.
- What is the observed frequency of everything else? A: 29.
- What is the expected frequency of a 1 or a 6, and of everything else? A: 20 for a 1 or a 6; 40 for everything else.
- Observed minus expected . . . let’s get the Chi Squared:
- χ^2 = (31 − 20)^2/20 + (29 − 40)^2/40
- = 121/20 + 121/40 = 6.05 + 3.025
- = 9.07

- Now, how many degrees of freedom do we have here?
- Two observed frequencies
- But we have a constraint here: the total is 60, so there’s really only one independent piece of information - if we know the frequency of getting a 1 or a 6, we also know the frequency of getting everything else.

- So, for Chi Squared with one degree of freedom at α = .005, the cutoff is 7.88, and our 9.07 exceeds it. If you were to calculate the actual p value for this, it would be about .0026.
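The focused test above can be run directly, collapsing the six outcomes into two categories:

```python
# The focused ("laser") test from the notes: collapse die outcomes into
# 1-or-6 versus everything else, and test against the fair-die
# expectations of 20 and 40 out of 60 rolls.
from scipy.stats import chisquare

observed = [31, 29]                   # counts from the notes: 1-or-6, other
expected = [20, 40]                   # fair die: 60 * (2/6) and 60 * (4/6)
stat, p = chisquare(observed, f_exp=expected)
print(stat, p)                        # chi-square about 9.07 on 1 df
```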

The first test we had, you could call a “blob test” - 5 degrees of freedom spread over all six faces . . . but a test with 1 degree of freedom, aimed at a specific hypothesis, is more of a “laser test.”

S^2 = ∑(X_i − x̅)^2 / (n − 1)

## Application of Chi Square

Fisher looked at Mendel’s data, which had 3 degrees of freedom - with ν = 3, the mode is at 1, and so forth. He compared what Mendel predicted with what Mendel observed.

With 3 degrees of freedom Fisher expected a chi square of about 3, but the chi square he got was very small: Mendel’s data fit the model more closely than one would expect by chance. Fisher had a charitable explanation: that Mendel had over-enthusiastic assistants.

Others have analyzed this too, and made arguments that maybe it wasn’t as bad as Fisher made it out to be, but if the data fits the model too well, that may imply that something fishy (i.e., suspicious) is going on here.

Here’s an issue: If you did a χ^2 test with 1 degree of freedom, is this a 1-Tailed test or a 2-Tailed test?

- If you get a big χ^2 with one degree of freedom, it’s actually a 2-tailed test: a large χ^2 can come from a deviation in either direction. If you want to do a 1-tailed test, you would need to take the signed square root, Z = ±√χ^2, and compare it to a one-tailed Z cutoff (equivalently, halve the p value).

Question: With Mendel’s work, in genetics, do we really expect a random distribution? A: If two parents, each carrying one blue-eyed gene and one brown-eyed gene, have children, what’s the probability that a child has blue eyes? 1/4. (J: I’m not sure if this is biologically accurate . . .) The same logic applies to the peas.

- You have a null hypothesis you’re dealing with here: each plant is a cross of two particular peas, and 1/4 of them should turn out one way or the other.

Q: How did you get the 11.07 as the cutoff value above? A: That comes from a table.

## Other Applications

Now we have a little two-by-two table; you can find this in Packet 5.

- See Step 2 in the Google Spreadsheet
- What % of people with BA are on Salary? A: 85%
- What % of people without a BA are on Salary? A: 60%

If there is no relationship between salary and education level, what would we expect those frequencies to be? (Refer to Step 2.1, Independence Hypothesis.)

- The null hypothesis is “Independence.”
- You would expect 25% of 40 - that is, 10 - as the expected frequency.

The df = (# of rows − 1) × (# of columns − 1)

- In this case, for a 2-by-2 table, that’s (2 − 1)·(2 − 1) = 1·1 = 1; df (degrees of freedom) = 1.

There’s a neat little hand calculation you can do here:

- For a 2x2 table:
- χ^2 = N(a·d − b·c)^2 / ((a+b)·(c+d)·(a+c)·(b+d))
- Where:
- a is the top-left cell;
- b is the top-right cell;
- c is the bottom-left cell;
- d is the bottom-right cell;
- N = a + b + c + d.

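Here is a sketch of that hand calculation checked against scipy’s (uncorrected) test of independence. The cell counts are hypothetical: 40 people with a BA (85% on salary) and 40 without (60% on salary), loosely following the percentages in the notes.

```python
# The 2x2 hand-calculation shortcut, checked against scipy's uncorrected
# chi-square test of independence. The counts are hypothetical.
from scipy.stats import chi2_contingency

def chi2_2x2(a, b, c, d):
    """Shortcut formula: N(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

a, b, c, d = 34, 6, 24, 16            # hypothetical: salary/hourly by BA/no-BA
shortcut = chi2_2x2(a, b, c, d)
stat, p, df, _ = chi2_contingency([[a, b], [c, d]], correction=False)
print(shortcut, stat, df)             # same statistic either way; df = 1
```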

Now, there is an adjustment sometimes called Yates’s correction for continuity; people have debated how important it is to apply this. (See https://docs.google.com/spreadsheets/d/12s4TcLMNEvfKl_rVmAmXA5IWqJ2EcxTbzH_PFZtZJ-E/edit#gid=1502721222 Example 3)

There’s only one circumstance where applying this correction is appropriate; the situation is called Fixed Marginals. The Median Test is like this:

- You have 80 people; you’re going to sample them and divide them into two groups.
- You decide ahead of time that you’re going to have 40 people in each group.

- You expect marginal values of 40 for each; if you know the margins before collecting the data, that is called fixed margins.
- That is the only case in which you will use the continuity correction. (J: in the Behavioral Sciences, as of 2014-04-01)
- Now, for your expected frequencies: with independent samples, you would expect to find 20 people in each cell.
- Reduce |fo − fe| by .5 before squaring (“fo” = frequency observed; “fe” = frequency expected; “|__|” = absolute value).
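A sketch of the corrected calculation, checked against scipy (which applies Yates’s correction by default on 2x2 tables). The table is hypothetical, built to have fixed margins of 40 so every expected count is 20:

```python
# Yates's continuity correction by hand, checked against scipy. The
# counts are hypothetical, with all margins fixed at 40 as in the
# median-test setup, so every expected cell count is 20.
from scipy.stats import chi2_contingency

table = [[25, 15],
         [15, 25]]                    # hypothetical counts; all margins = 40
expected = 20.0                       # fixed margins of 40 -> 20 per cell

# Corrected statistic: sum of (|fo - fe| - .5)^2 / fe over the four cells
corrected = sum((abs(fo - expected) - 0.5) ** 2 / expected
                for row in table for fo in row)

stat, p, df, _ = chi2_contingency(table, correction=True)
print(corrected, stat)                # both 4.05
```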

## Non-Parametric Chi Square and Fisher’s Exact Test

For cases where you have very small expected frequencies, you can’t apply this test of independence. Let’s take an example, where we’ve got a program where people start, then drop out.

Now, for this you can calculate the multinomial outcome:

- The probability of this exact outcome, given the observed marginals, is
- ((a+b)!·(c+d)!·(a+c)!·(b+d)!) / (n!·a!·b!·c!·d!)
- = (5!·6!·4!·7!) / (11!·0!·5!·4!·2!)
- = (6!·7!) / (11!·2!) = 720 / (11·10·9·8·2)
- = 1/22
- ≈ .045

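A sketch of that exact-probability calculation, using the table (a, b, c, d) = (0, 5, 4, 2) implied by the factorials above, and cross-checked against the hypergeometric pmf:

```python
# Exact probability of a 2x2 table given its margins, from the factorial
# formula, checked against the hypergeometric pmf. Table from the notes:
# (a, b, c, d) = (0, 5, 4, 2), n = 11, margins 5, 6, 4, 7.
from math import factorial as f
from scipy.stats import hypergeom

a, b, c, d = 0, 5, 4, 2
n = a + b + c + d

prob = (f(a + b) * f(c + d) * f(a + c) * f(b + d)) / (
    f(n) * f(a) * f(b) * f(c) * f(d))

# Same quantity as a hypergeometric pmf: of a+b = 5 items drawn from
# n = 11, how likely is it that a = 0 of the a+c = 4 "successes" appear?
prob2 = hypergeom.pmf(a, n, a + c, a + b)
print(prob, prob2)                    # both 1/22, about .045
```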

## McNemar’s Test

Now, let’s look at McNemar’s Test (pronounced “Mac-no-mar” by the professor). (See Example 5.)

- H0: p(+|Judge 1) = p(+|Judge 2).
- The null hypothesis says that the disagreements (cells b and c) are a random outcome of a 50-50 split - a binomial distribution. People have various formulas to approximate this. You can use a:
- Chi Square formula: χ^2 = (|b − c| − 1)^2 / (b + c)
- = (|15 − 5| − 1)^2 / (15 + 5)
- = 9^2/20 = 81/20 = *4.05*
- We would say that Judge 2 is ___ (J: QUESTION 1!). This is an **approximation**.

- Z formula for approximation of the binomial distribution (with Yates’s correction). This is what you actually want to apply: the binomial distribution (also called the “binomial test”):
- Z = (|x − np| − .5) / √(n·p·q), where:
- x = the number of hits you got; the number observed in cell b (here, x = b = 15)
- p = the likelihood of a hit for cell b under the null = .5
- q = the likelihood of a miss for cell b = 1 − p = .5
- n = b + c = 20

- Resulting p = .021 < .05, therefore we have statistical significance. (Note: current best practice reports a 95% Confidence Interval, not just p values.)


With a small set of data, McNemar’s Chi Squared approximation is not accurate; I would suggest you never use it. In SPSS’s Chi Square tests table, a superscript indicated that a binomial distribution was used for the McNemar Test - that could be SPSS telling you not to use McNemar’s χ^2 approximation as well. :)
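A sketch of both routes on the judge example (b = 15, c = 5 discordant pairs): the corrected χ^2 approximation, and the exact binomial calculation the notes recommend.

```python
# McNemar's test on the judge example: b = 15, c = 5 discordant pairs.
# Compare the corrected chi-square approximation with the exact
# binomial test (H0: the b + c = 20 disagreements split 50-50).
from scipy.stats import binom

b, c = 15, 5

# Chi-square approximation with continuity correction
chi2_stat = (abs(b - c) - 1) ** 2 / (b + c)
print(chi2_stat)                      # 4.05, as in the notes

# Exact binomial: P(X >= 15) for X ~ Binomial(20, .5)
p_one_tailed = binom.sf(b - 1, b + c, 0.5)
p_two_tailed = 2 * p_one_tailed
print(p_one_tailed, p_two_tailed)     # about .021 one-tailed
```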

Example 6 - other applications, such as how people would vote from one year to the next.

## See also

- Linear correlation
- Pearson's chi-square test (Wikipedia)