EDHE6210_20100426

Page history last edited by Starr Hoffman 13 years, 12 months ago

04.26.2010

remember: no class next week!

PAPER: hard copy in Dr. Henson's mailbox by 5pm, Wednesday, May 5

 

we won't get through all the topics in this class, so some of the four articles he handed out tonight cover what we won't go over in class.

  • (regression discontinuity is a way to think about quasi-experimental design. best way to know you have something is to replicate it; best way to know what causes something is experimental design (assuming random sampling)...discontinuity is another way to look at cause/effect relationships--weaker than true experiments, but better than nothing.)
  • ANCOVA paper (follow-up to last week's topic); GOOD EXAMPLE of model for our paper

 

Planned Contrast

 

omnibus ANOVA:

  • an ANOVA that looks at everything
  • if you have 4 groups, how many pairwise mean comparisons are there? k(k − 1)/2 = (4 × 3)/2 = 6 
    • simple comparisons: one mean with one other mean
    • complex comparisons: one mean with combined means or combined means to combined means
    • if you add complex to simple comparisons, there are many more mean comparisons 
  • in the past, we've typically only done simple comparisons in ANOVAs 
  • what happens if you get a statistically significant F in your ANOVA analysis? what does that tell you in the ANOVA? 
    • that you have at least one mean difference. (could be more, could be one)
    • you have no clue where that difference is
    • so you run post-hoc tests; these compare each mean with every other mean 
      • ALWAYS do post-hoc tests, unless you are only studying 2 groups 
    • why not run t-tests at first?  
      • running many t-tests inflates the tolerance for Type I Error
      • with a tolerance for error of p = .05 per t-test, the error compounds (roughly .05 for every test you run); this cumulative error = experiment-wise or family-wise Type I error
    • post-hoc tests are designed to lower the test-wise alpha rate (the tolerance for Type I error); so the overall error rate will still be .05 
      • if alpha is lowered too much (too many means compared), it will be harder and harder to achieve statistical significance 
      • omnibus ANOVA is the worst case, because you are comparing everything possible, thus lowering alpha significantly
      • if you have specific questions, you can run a Planned Contrast Analysis (A-Priori Contrast, or Planned Comparison) 
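A quick sketch of the two facts above (the counting formula and why multiple t-tests inflate error); the family-wise calculation assumes independent tests, so it is an approximation:

```python
# Sketch (not from the notes): how fast family-wise Type I error grows
# when every pairwise comparison is run as a separate t-test at alpha = .05.

def pairwise_comparisons(k):
    """Number of simple (mean vs. mean) comparisons among k groups."""
    return k * (k - 1) // 2

def familywise_alpha(m, alpha=0.05):
    """P(at least one Type I error) across m independent tests."""
    return 1 - (1 - alpha) ** m

m = pairwise_comparisons(4)          # 4 groups -> 6 pairwise comparisons
print(m)                             # 6
print(round(familywise_alpha(m), 3)) # 0.265 -- far above the nominal .05
```

This is exactly why post-hoc procedures shrink the per-test alpha: six tests at .05 each push the overall chance of a false positive past one in four.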

 

asking specific questions means you want to compare specific means with other specific means.

(bull fighting metaphor: the point is to kill the bull, but to do it slowly and stylistically.)

planned contrasts are a lot like bull fights:

  • stylistically-driven
  • purposeful questions
  • theory-driven 

the research questions you ask should be relevant to your theory.

BUT if you ask the wrong questions, you might miss the mean comparison that's there, because you left out the relevant comparison.

 

Sum of Squares total = total variance in the dependent variable.

in an ANOVA, we break that variability into pieces:

  • differences between group means (explainable variance)
  • differences among the scores within each of the groups (regression calls this error or residual; this is what can't be explained)
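The split above can be verified numerically; a minimal sketch with made-up scores (not data from the class):

```python
# Sketch (illustrative data, not from the notes): SOS total splits into
# between-groups (explainable) and within-groups (error/residual) pieces.
groups = [
    [4.0, 6.0, 5.0],   # group 1 scores on the DV
    [7.0, 9.0, 8.0],   # group 2
    [2.0, 3.0, 4.0],   # group 3
]
all_scores = [x for g in groups for x in g]
grand_mean = sum(all_scores) / len(all_scores)

ss_total = sum((x - grand_mean) ** 2 for x in all_scores)
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

# the decomposition: total = between (explainable) + within (error)
assert abs(ss_total - (ss_between + ss_within)) < 1e-9
print(ss_between, ss_within)   # 38.0 6.0
```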

 

instead of asking an omnibus question, we're going to ask specific questions

how many questions can you ask of your data?

  • degrees of freedom: tells us how many questions we can ask. a df is the price you pay for asking a research question of your data; you burn one df for every question you ask. 
  • if there are 4 groups/means, we can ask 3 questions.
  • think about what you want to find and where you expect the differences to be.
  • drawback: if you ask the wrong questions, you may miss a mean difference somewhere. 
  • ONLY DO this if you have a specific theory to ask of your data. this is NOT a method of serendipitous research discovery.
  • YOU ALWAYS want to ask AS MANY QUESTIONS as your degrees of freedom allow (even if it's not technically theory-driven). 

 

asking 3 specific research hypotheses carves the "between" (explainable variance) into 3 pieces.

  • the sum of these three pieces adds up to the eta-squared (explainable variance) of the omnibus ANOVA.
  • to combine groups for comparison, you need to create a new variable (re-code the data).
    • there are many ways to do this; we are just learning one method today. 
    • example: 2-way ANOVA design, AxB, 4x3
    • what is the minimum number of people you need for your study? at least 2 people per cell; 4x3 = 12 cells, so 24 people at BARE MINIMUM. 
    • that means (fully crossed ANOVA) each of the 4 A-groups has 3 × 2 = 6 people AT BARE MINIMUM,
    • and each of the 3 B-groups has 4 × 2 = 8 people.
    • (SEE HANDOUT: wrote example of how to code this group on paper)
    • we are setting it up in this hierarchical coding manner so that the contrasts are ORTHOGONAL to each other.
    • why? we want to look at UNIQUE variance, each explaining a different piece of the variance, so that we don't look at overlapping explanation.

 

FOR QUALS, look at the back of the 6010 textbook (green book) and look at the appendix on different ways to code things 

 

(see more notes on paper here.) 

 

EXAMPLE:

see Tucker chapter

  • he's entering the data from Table 1
  • (group, dv)

 

create contrast variables, so:

  • file: new: syntax

 

take the grouping variable and convert it to numbers, so you can compare the groups

6 groups, 5 degrees of freedom

c1 = first contrast variable; initialize all values to 0, but we want group 1 to have -1 and group 2 to have 1.

 

COMPUTE c1=0.

IF (group=1) c1=-1.

IF (group=2) c1=1.

 

COMPUTE c2=0.

IF (c1 NE 0) c2=-1.

IF (group=3) c2=2.

 

COMPUTE c3=0.

IF (c2 NE 0) c3=-1.

IF (group=4) c3=3.

 

COMPUTE c4=0.

IF (c3 NE 0) c4=-1.

IF (group=5) c4=4.

 

COMPUTE c5=0.

IF (c4 NE 0) c5=-1.

IF (group=6) c5=5.

 

EXECUTE.
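To see the pattern the syntax above builds, here is a sketch of the same recode outside SPSS (a hypothetical helper, not part of the notes): each contrast compares one group against all earlier groups combined.

```python
# Sketch: the hierarchical (reverse-Helmert style) codes the SPSS syntax
# above produces for 6 groups. Contrast j compares group j+1 against all
# earlier groups combined.
def contrast_codes(group, k=6):
    codes = []
    for j in range(1, k):        # contrasts c1 .. c5
        if group <= j:
            codes.append(-1)     # earlier groups share the -1 side
        elif group == j + 1:
            codes.append(j)      # the compared group gets weight j
        else:
            codes.append(0)      # later groups sit this contrast out
    return codes

print(contrast_codes(1))   # [-1, -1, -1, -1, -1]
print(contrast_codes(3))   # [0, 2, -1, -1, -1]
print(contrast_codes(6))   # [0, 0, 0, 0, 5]
```

Note this matches the SPSS logic exactly: `IF (c1 NE 0) c2=-1.` works because only groups 1 and 2 have a nonzero c1, so they both land on the −1 side of c2 while group 3 gets the positive weight.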

 

now we want to run a regression comparing the dependent variable with the contrast variables.

 

DO THIS IN SPSS's WYSIWYG:

linear regression, DV = dependent variable

Independent= PAY ATTENTION HERE

add c1, THEN CLICK NEXT (above "independent") 

add c2, THEN CLICK NEXT

add c3, CLICK NEXT

add c4, CLICK NEXT

add c5, CLICK NEXT

click PASTE (now it's in your syntax file)

 

RUN the regression.

 

so we're looking at the ANOVA table, and we're going to take that data to make a NEW TABLE.

WE WILL BE DOING THIS ON THE  FINAL.

 

Source | SOS | df | MS | F-calc
c1 | 0 | 1 | (do the MATH: MS = SOS/df; F-calc = MS/MSerror) | 
c2 | 0 (SOS = model 2 SOS − model 1 SOS) | 1 (also burned ONE df here) | | 
c3 | 0 (SOS = SOS3 − SOS2) | 1 | | 
c4 | 0 (SOS = SOS4 − SOS3) | 1 | | 
c5 | 375 (SOS = SOS5 − SOS4) | 1 | | 
error | 300 (residual from model 5) | 6 (leftover df) | | 
total | SUM of all the above | 11 (n − 1) | | 

 

Your F will be different from what it is in the ANOVA table.

Use an F-table to determine if it's statistically significant.

 

F = MSexplained (MSc5) / MSerror

MS = SOS / df
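Plugging the c5 and error rows from the table above into those two formulas (the critical value quoted in the comment is an approximate F-table lookup, not from the notes):

```python
# Worked example using the c5 row and error row from the table above.
sos_c5, df_c5 = 375.0, 1         # SOS5 - SOS4, one df burned
sos_error, df_error = 300.0, 6   # residual from model 5, leftover df

ms_c5 = sos_c5 / df_c5           # MS = SOS / df
ms_error = sos_error / df_error  # 300 / 6 = 50
f_calc = ms_c5 / ms_error        # F = MSexplained / MSerror

# F-table critical value for (1, 6) df at alpha = .05 is about 5.99,
# so F-calc = 7.5 would be statistically significant.
print(f_calc)   # 7.5
```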

 

Final 

 

definitely need a Results and a Discussion section.

SEE HANDOUT for more details.
