## Experimental Design & Analysis for Psychology

### Hervé Abdi, Betty Edelman, Dominique Valentin, and W. Jay Dowling

## Table of Contents

**1 Introduction to Experimental Design**

1.1. General overview

1.2. Independent and dependent variables

1.3. Independent variables

1.4. Dependent variables

1.5. Common defective experimental designs

1.6. The choice of subjects and the representative design of experiments

1.7. Key notions of the chapter

**2 Correlation**

2.1. Introduction

2.2. Correlation: Overview and Example

2.3. Rationale and computation of the coefficient of correlation

2.4. Interpreting correlation and scatterplots

2.5. The importance of scatterplots

2.6. Correlation and similarity of distributions

2.7. Correlation and Z-scores

2.8. Correlation and causality

2.9. Squared correlation as common variance

2.10. Key notions of the chapter

2.11. Key formulas of the chapter

2.12. Key questions of the chapter

**3 Statistical Test: The F test**

3.1. Introduction

3.2. Statistical Test

3.3. For experts: Not zero is not enough!

3.4. Key notions of the chapter

3.5. New notations

3.6. Key formulas of the chapter

3.7. Key questions of the chapter

**4 Simple Linear Regression**

4.1. Generalities

4.2. The regression line is the "best-fit" line

4.3. Example: Reaction Time and Memory Set

4.4. How to evaluate the quality of prediction

4.5. Partitioning the total sum of squares

4.6. Mathematical Digressions

4.7. Key notions of the chapter

4.8. New notations

4.9. Key formulas of the chapter

4.10. Key questions of the chapter

**5 Orthogonal Multiple Regression**

5.1. Generalities

5.2. The regression plane is the "best-fit" plane

5.3. Back to the example: Retroactive interference

5.4. How to evaluate the quality of the prediction

5.5. F tests for the simple coefficients of correlation

5.6. Partitioning the sums of squares

5.7. Mathematical Digressions

5.8. Key notions of the chapter

5.9. New notations

5.10. Key formulas of the chapter

5.11. Key questions of the chapter

**6 Non-Orthogonal Multiple Regression**

6.1. Generalities

6.2. An example: Age, speech rate and memory span

6.3. Computation of the regression plane

6.4. How to evaluate the quality of the prediction

6.5. Semi-partial correlation as increment in explanation

6.6. F tests for the semi-partial correlation coefficients

6.7. What to do with more than two independent variables

6.8. Bonus: Partial correlation

6.9. Key notions of the chapter

6.10. New notations

6.11. Key formulas of the chapter

6.12. Key questions of the chapter

**7 ANOVA One Factor: Intuitive Approach**

7.1. Introduction

7.2. Intuitive approach

7.3. Computation of the F ratio

7.4. A bit of computation: Mental Imagery

7.5. Key notions of the chapter

7.6. New notations

7.7. Key formulas of the chapter

7.8. Key questions of the chapter

**8 One Factor, S(A): Test, Computation, & Effect Size**

8.1. Statistical test: A refresher

8.2. An example: back to mental imagery

8.3. Another more general notation: A and S(A)

8.4. Presentation of the results of the ANOVA

8.5. ANOVA with two groups: F and t

8.6. Another example: Romeo and Juliet

8.7. How to estimate the effect size

8.8. Computational formulas

8.9. Key notions of the chapter

8.10. New notations

8.11. Key formulas of the chapter

8.12. Key questions of the chapter

**9 One Factor, S(A): Regression Point of View**

9.1. Introduction

9.2. Example 1: Memory and Imagery

9.3. Analysis of variance for Example 1

9.4. Regression approach for Example 1: Mental Imagery

9.5. Equivalence between regression and analysis of variance

9.6. Example 2: Romeo and Juliet

9.7. If regression and analysis of variance are one thing, why keep two different techniques?

9.8. Digression

9.9. Multiple regression and analysis of variance

9.10. Key notions of the chapter

9.11. Key formulas of the chapter

9.12. Key questions of the chapter

**10 Design: S(A): Score Model**

10.1. The score model

10.2. ANOVA with one random factor (Model II)

10.3. The Score Model: Model II

10.4. F < 1 or The Strawberry Basket!

10.5. Three exercises

10.6. Key notions of the chapter

10.7. New notations

10.8. Key formulas of the chapter

10.9. Key questions of the chapter

**11 The Assumptions of Analysis of Variance**

11.1. Overview

11.2. Validity assumptions

11.3. Testing the Homogeneity of variance assumption

11.4. Example

11.5. Testing Normality: Lilliefors

11.6. Notation

11.7. Numerical example

11.8. Numerical approximation

11.9. Transforming scores

11.10. Key notions of the chapter

11.11. New notations

11.12. Key formulas of the chapter

11.13. Key questions of the chapter

**12 Planned Orthogonal Comparisons**

12.1. General overview

12.2. What is a contrast?

12.3. The different meanings of alpha

12.4. An example: Context and Memory

12.5. Checking the independence of two contrasts

12.6. Computing the sum of squares for a contrast

12.7. Another view: Contrast analysis as regression

12.8. Critical values for the statistical index

12.9. Back to the Context

12.10. Significance of F vs. specific contrasts

12.11. How to present the results of orthogonal comparisons?

12.12. The omnibus F is a mean

12.13. Sum of orthogonal contrasts: Subdesign analysis

12.14. Key notions of the chapter

12.15. New notations

12.16. Key formulas of the chapter

12.17. Key questions of the chapter

**13 Planned Non-orthogonal Comparisons**

13.1. General Overview

13.2. The classical approach

13.3. Multiple regression: The return!

13.4. Key notions of the chapter

13.5. New notations

13.6. Key formulas of the chapter

13.7. Key questions of the chapter

**14 Post hoc or a posteriori analyses**

14.1. Introduction

14.2. Scheffé's test: All possible contrasts

14.3. Pairwise comparisons

14.4. Key notions of the chapter

14.5. New notations

14.6. Key questions of the chapter

**15 Two Factors, S(A × B)**

15.1. Introduction

15.2. Organization of a two-factor design: A × B

15.3. Main effects and interaction

15.4. Partitioning the experimental sum of squares

15.5. Degrees of freedom and mean squares

15.6. The Score Model (Model I) and the sums of squares

15.7. An example: Cute Cued Recall

15.8. Score Model II: A and B random factors

15.9. ANOVA A × B (Model III): one factor fixed, one factor random

15.10. Index of effect size

15.11. Statistical assumptions and conditions of validity

15.12. Computational formulas

15.13. Relationship between the sources

15.14. Key notions of the chapter

15.15. New notations

15.16. Key formulas of the chapter

15.17. Key questions of the chapter

**16 Factorial designs and contrasts**

16.1. Introduction

16.2. Fine grained partition of the standard decomposition

16.3. Contrast and standard decomposition

16.4. What error term should be used?

16.5. Example: partitioning the standard decomposition

16.6. Contrasts non-orthogonal to the canonical decomposition

16.7. A posteriori Comparisons

**17 One Factor Repeated Measures design, S × A**

17.1. Introduction

17.2. Examination of the F Ratio

17.3. Partitioning the SS within: S(A) = S + SA

17.4. Computing F in an S × A design

17.5. A numerical example: S × A design

17.6. Score Model: Model I and II for repeated measures designs

17.7. Estimating the size of the experimental effect

17.8. Problems with repeated measures

17.9. An example with computational formulas

17.10. Another example: Proactive interference

17.11. Score model (Model I) S × A design: A fixed

17.12. Score model (Model II) S × A design: A random

17.13. Key notions of the chapter

17.14. New notations

17.15. Key formulas of the chapter

17.16. Key questions of the chapter

**18 Two Factors Completely Repeated Measures: S × A × B**

18.1. Introduction

18.2. An example: Plungin'!

18.3. Sums of Squares, Mean Squares, and F ratios

18.4. Score model (Model I), S × A × B design: A and B fixed

18.5. Results of the experiment: Plungin'

18.6. Score Model (Model II): S × A × B design, A and B random

18.7. Score Model (Model III): S × A × B design, A fixed, B random

18.8. Quasi-F: F'

18.9. A cousin F''

18.10. Validity assumptions, measures of intensity, key notions, etc.

18.11. New notations

18.12. Key formulas of the chapter

**19 Two Factors Partially Repeated Measures: S(A) × B**

19.1. Introduction

19.2. An Example: Bat and Hat

19.3. Sums of Squares, Mean Squares, and F ratio

19.4. The comprehension formula routine

19.5. The 13 points computational routine

19.6. Score model (Model I), S(A) × B design: A and B fixed

19.7. Score model (Model II), S(A) × B design: A and B random

19.8. Score model (Model III), S(A) × B design: A fixed and B random

19.9. Coefficients of Intensity

19.10. Validity of S(A) × B designs

19.11. Prescription

19.12. New notations

19.13. Key formulas of the chapter

19.14. Key questions of the chapter

**20 Nested Factorial Designs: S × A(B)**

20.1. Introduction

20.2. An Example: Faces in Space

20.3. How to analyze an S × A(B) design?

20.4. Back to the example: Faces in Space

20.5. What to do with A fixed and B fixed

20.6. When A and B are random factors

20.7. When A is fixed and B is random

20.8. New notations

20.9. Key formulas of the chapter

20.10. Key questions of the chapter

**21 How to derive expected values for any design**

21.1. Crossing and nesting refresher

21.2. Finding the sources of variation

21.3. Writing the score model

21.4. Degrees of freedom and sums of squares

21.5. An example

21.6. Expected values

21.7. Two additional exercises

**A Descriptive Statistics**

**B The sum sign: ∑**

**C Expected Values**

**D Elementary Probability: A Refresher**

**E Probability Distributions**

**F The Binomial Test**

**G Statistical tables**