
Statistical Methods in Education and Psychology Gene V. Glass


Condition - Good




Statistical Methods in Education and Psychology Summary

Statistical Methods in Education and Psychology by Gene V. Glass

The approach of Statistical Methods in Education and Psychology, Third Edition, is conceptual rather than mathematical. The authors stress the understanding, applications, and interpretation of concepts rather than derivation, proof, or hand computation. The selection of topics was guided by three considerations: (1) What are the most useful statistical methods? (2) Which statistical methods are the most widely used in journals in the behavioral and social sciences? (3) Which statistical methods are fundamental to further study?

Table of Contents

Each chapter begins with an Introduction and concludes with a Case Study, Chapter Summary, Suggested Computer Activities, Mastery Test, Answers to the Mastery Test, and Problems and Exercises.


1. Introduction.

The Image of Statistics.

Descriptive Statistics.

Inferential Statistics.

Statistics and Mathematics.

Case Method.

Our Targets.

2. Measurement, Variables, and Scales.

Variables and their Measurement.

Measurement: The Observation of Variables.

Measurement Scales.

Nominal Measurement.

Ordinal Measurement.

Interval Measurement.

Ratio Measurement.

Interrelationships among Measurement Scales.

Continuous and Discrete Variables.

3. Frequency Distributions and Visual Displays of Data.

Tabulating Data.

Grouped Frequency Distributions.

Grouping and Loss of Information.

Graphing a Frequency Distribution: The Histogram.

Frequency and Percentage Polygons.

Type of Distribution.

Cumulative Distributions and the Ogive Curve.


Box-and-Whisker Plots.

Stem-and-Leaf Displays.

Time-Series Graphs.

Misleading Graphs: How to Lie with Statistics.

4. Measures of Central Tendency.

The Mode.

The Median.

Summation Notation.

The Mean.

More Summation Notation.

Adding or Subtracting a Constant.

Multiplying or Dividing by a Constant.

Sum of Deviations.

Sum of Squared Deviations.

The Mean of the Sum of Two or More Scores.

The Mean of a Difference.

Mean, Median, and Mode of Two or More Groups.

Interpretation of Mode, Median, and Mean.

Central Tendency and Skewness.

Measures of Central Tendency as Inferential Statistics.

Which Measure is Best?

5. Measures of Variability.

The Range.

H-Spread and the Interquartile Range.

Deviation Scores.

Sum of Squares.

More about the Summation Operator, Σ.

The Variance of a Population.

The Variance Estimated From a Sample.

The Standard Deviation.

The Effect of Adding or Subtracting a Constant on Measures of Variability.

The Effect of Multiplying or Dividing a Constant on Measures of Variability.

Variance of a Combined Distribution.

Inferential Properties of the Range, s², and s.

6. The Normal Distribution and Standard Scores.

The Importance of the Normal Distribution.

God Loves the Normal Curve.

The Standard Normal Distribution as a Standard Reference Distribution: z-Scores.

Ordinates of the Normal Distribution.

Areas Under the Normal Curve.

Other Standard Scores.


Areas Under the Normal Curve in Samples.




Normalized Scores.

7. Correlation: Measures of Relationship Between Two Variables.

The Concept of Correlation.


The Measurement of Correlation.

The Use of Correlation Coefficients.

Interpreting r as a Percent.

Linear and Curvilinear Relationships.

Calculating the Pearson Product-Moment Correlation Coefficient, r.


Correlation Expressed in Terms of z-Scores.

Linear Transformations and Correlation.

The Bivariate Normal Distribution.

Effects of Variability on Correlation.

Correcting for Restricted Variability.

Effect of Measurement Error on r and the Correction for Attenuation.

The Pearson r and Marginal Distributions.

The Effect of the Unit of Analysis on Correlation: Ecological Correlations.

The Variance of a Sum.

The Variance of a Difference.

Additional Measures of Relationship: The Spearman Rank Correlation.

The Phi Coefficient: Both X and Y are Dichotomies.

The Point Biserial Coefficient.

The Biserial Correlation.

Biserial versus Point-Biserial Correlation Coefficients.

The Tetrachoric Coefficient.

Causation and Correlation.

8. Regression and Prediction.

Purposes of Regression Analysis.

The Regression Effect.

The Regression Equation Expressed in Standard z-Scores.

Use of Regression Equations.

Cartesian Coordinates.

Estimating Y from X: The Raw-score Regression Equation.

Error of Estimate.

Proportion of Predictable Variance.

Least-squares Criterion.

Homoscedasticity and the Standard Error of Estimate.

Regression and Pretest-Posttest Gains.

Part Correlation.

Partial Correlation.

Second-Order Partial Correlation.

Multiple Regression and Multiple Correlation.

The Standardized Regression Equation.

The Raw-Score Regression Equation.

Multiple Correlation.

Multiple Regression Equation with Three or More Independent Variables.

Stepwise Multiple Regression.

Illustration of Stepwise Multiple Regression.

Dichotomous and Categorical Variables as Predictors.

The Standard Error of Estimate in Multiple Regression.

The Multiple Correlation as an Inferential Statistic: Correction for Bias.


Curvilinear Regression and Correlation.

Measuring Non-linear Relationships between Two Variables.

Transforming Non-linear Relationships into Linear Relationships.

Dichotomous Dependent Variables: Logistic Regression.

Categorical Dependent Variables with More than Two Categories: Discriminant Analysis.

9. Probability.

Probability as a Mathematical System.

First Addition Rule of Probabilities.

Second Addition Rule of Probabilities.

Multiplication Rule of Probabilities.

Conditional Probability.

Bayes's Theorem.



Binomial Probabilities.

The Binomial and Sign Test.

Intuition and Probability.

Probability as an Area.

Combining Probabilities.

Expectations and Moments.

10. Statistical Inference: Sampling and Interval Estimation.


Populations and Samples: Parameters and Statistics.

Infinite versus Finite Populations.

Randomness and Random Sampling.

Accidental or Convenience Samples.

Random Samples.


Systematic Sampling.

Point and Interval Estimates.

Sampling Distributions.

The Standard Error of the Mean.

Relationship of σX̄ to n.

Confidence Intervals.

Confidence Intervals when σ is Known: An Example.

Central Limit Theorem: A Demonstration.

The Use of Sampling Distributions.

Proof that σX̄² = σ²/n.

Properties of Estimators.



Relative Efficiency.

11. Introduction to Hypothesis Testing.

Statistical Hypotheses and Explanations.

Statistical versus Scientific Hypotheses.

Testing Hypotheses about μ.

Testing H0: μ = K, a One-Sample z-Test.

Two Types of Errors in Hypothesis Testing.

Hypothesis Testing and Confidence Intervals.

Type-II Error, β, and Power.


Effect of α on Power.

Power and the Value Hypothesized in the Alternative Hypothesis.

Methods of Increasing Power.

Non-Directional and Directional Alternatives: Two-Tailed versus One-Tailed Tests.

Statistical Significance versus Practical Significance.

Confidence Limits for the Population Median.

Inference Regarding μ when σ is not Known: t versus z.

The t-Distribution.

Confidence Intervals Using the t-Distribution.

Accuracy of Confidence Intervals when Sampling Non-Normal Distributions.

12. Inferences about the Difference Between Two Means.

Testing Statistical Hypotheses Involving Two Means.

The Null Hypotheses.

The t-Test for Comparing Two Independent Means.

Computing sX̄₁−X̄₂.

An Illustration.

Confidence Intervals about Mean Differences.

Effect Size.

t-Test Assumptions and Robustness.

Homogeneity of Variance.

What if Sample Sizes Are Unequal and Variances Are Heterogeneous? The Welch t′ Test.

Independence of Observations.

Testing H0: μ₁ = μ₂ with Paired Observations.

Direct Difference for the t-Test for Paired Observations.

Cautions Regarding the Matched-Pairs Designs in Research.

Power when Comparing Means.

Non-Parametric Alternatives: The Mann-Whitney Test and the Wilcoxon Signed-Rank Test.

13. Statistics for Categorical Dependent Variables: Inferences about Proportions.


The Proportion as a Mean.

The Variance of a Proportion.

The Sampling Distribution of a Proportion: The Standard Error of p.

The Influence of n on σp.

Influence of the Sampling Fraction on σp.

The Influence of P on σp.

Confidence Intervals for P.

Quick Confidence Intervals for P.

Testing H0: P = K.

Testing Empirical versus Theoretical Distributions: Chi-Square Goodness of Fit Test.

Testing Differences among Proportions: The Chi-Square Test of Association.

Other Formulas for the Chi-Square Test of Association.

The χ² Median Test.

Chi-Square and the Phi Coefficient.

Independence of Observations.

Inferences about H0: P1 = P2 when Observations are Paired: McNemar's Test for Correlated Proportions.

14. Inferences about Correlation Coefficients.

Testing Statistical Hypotheses Regarding ρ.

Testing H0: ρ = 0 Using the t-Test.

Directional Alternatives: Two-Tailed vs. One-Tailed Tests.

Sampling Distribution of r.

The Fisher Z-Transformation.

Setting Confidence Intervals for ρ.

Determining Confidence Intervals Graphically.

Testing the Difference between Independent Correlation Coefficients: H0: ρ₁ = ρ₂ = … = ρJ.

Averaging r's.

Testing Differences between Two Dependent Correlation Coefficients: H0: ρ31 = ρ32.

Inferences about Other Correlation Coefficients.

The Point-Biserial Correlation Coefficient rpb.

Spearman's Rank Correlation: H0: ρranks = 0.

Partial Correlation: H0: ρ12.3 = 0.

Significance of a Multiple Correlation Coefficient.

Statistical Significance in Stepwise Multiple Regression.

Significance of the Biserial Correlation Coefficient rbis.

Significance of the Tetrachoric Correlation Coefficient rtet.

Significance of the Correlation Ratio Eta.

Testing for Non-linearity of Regression.

15. One-Factor Analysis of Variance.

Why Not Several t-Tests?

ANOVA Nomenclature.

ANOVA Computation.

Sum of Squares Between, SSB.

Sum of Squares Within, SSW.

ANOVA Computational Illustration.

ANOVA Theory.

Mean Square Between Groups, MSB.

Mean Square Within Groups, MSW.

The F-Test.

ANOVA with Equal n's.

A Statistical Model for the Data.

Estimates of the Terms in the Model.

Sum of Squares.

Restatement of the Null Hypothesis in Terms of Population Means.

Degrees of Freedom.

Mean Squares: The Expected Value of MSW.

The Expected Value of MSB.

Some Distribution Theory.

The F-Test of the Null Hypothesis: Rationale and Procedure.

Type-I versus Type-II Errors: α and β.

A Summary of Procedures for One-Factor ANOVA.

Consequences of Failure to Meet the ANOVA Assumptions: The Robustness of ANOVA.

The Welch and Brown-Forsythe Modifications of ANOVA: What Does One Do When σ²'s and n's Differ?

The Power of the F-Test.

An Illustration.

Power When σ is Unknown.

A Table for Estimating Power When J=2.

The Non-Parametric Alternative: The Kruskal-Wallis Test.

16. Inferences About Variances.

Chi-Square Distributions.

The Chi-Square Distribution with ν Degrees of Freedom, χ²ν.

Inferences about the Population Variance: H0: σ² = K.


Inferences about Two Independent Variances: H0: σ₁² = σ₂².

Testing Homogeneity of Variance: Hartley's Fmax Test.

Testing Homogeneity of Variance from J Independent Samples: The Bartlett Test.

Other Tests of Homogeneity of Variance: The Levene and Brown-Forsythe Tests.

Inferences about H0: σ₁² = σ₂² with Paired Observations.

Relationships among the Normal, t, χ², and F-Distributions.

17. Multiple Comparisons and Trend Analysis.

Testing All Pairs of Means: The Studentized Range Statistic, q.

The Tukey Method of Multiple Comparisons.

The Effect Size of Mean Differences.

The Basis for Type-I Error Rate: Contrast vs. Family.

The Newman-Keuls Method.

The Tukey and Newman-Keuls Methods Compared.

The Definition of a Contrast.

Simple versus Complex Contrasts.

The Standard Error of a Contrast.

The t-Ratio for a Contrast.

Planned versus Post Hoc Comparisons.

Dunn (Bonferroni) Method of Multiple Comparisons.

Dunnett Method of Multiple Comparisons.

Scheffe Method of Multiple Comparisons.

Planned Orthogonal Contrasts.

Confidence Intervals for Contrasts.

Relative Power of Multiple Comparison Techniques.

Trend Analysis.

Significance of Trend Components.

Relation of Trends to Correlation Coefficients.

Assumptions of MC Methods.

Multiple Comparisons for Other Statistics.

Chapter Summary and Criteria for Selecting a Multiple Comparison Method.

18. Two and Three Factor ANOVA: An Introduction to Factorial Designs.

The Meaning of Interaction.

Interactions and Generalizability: Factors Do Not Interact.

Interactions and Generalizability: Factors Interact.

Interpreting when Interaction is Present.

Statistical Significance and Interaction.

Data Layout and Notation.

A Model for the Data.

Least-Squares Estimates of the Model.

Statement of Null Hypotheses.

Sums of Squares in the Two-Factor ANOVA.

Degrees of Freedom.

Mean Squares.

Illustration of the Computation for the Two-Factor ANOVA.

Expected Values of Mean Squares.

The Distribution of the Mean Squares.

Determining Power in Factorial Designs.

Multiple Comparisons in Factorial ANOVA Designs.

Confidence Intervals for Means in Two-Factor ANOVA.

Three-Factor ANOVA.

Three-Factor ANOVA: An Illustration.

Three-Factor ANOVA Computation.

The Interpretation of Three-Factor Interaction.

Confidence Intervals in Three-Factor ANOVA.

How Factorial Designs Increase Power.

Factorial ANOVA with Unequal n's.

19. Multi-Factor ANOVA Designs: Random, Mixed, and Fixed Effects.

The Random-Effects ANOVA Model.

Assumptions of the Random ANOVA Model.

An Example.

Mean Square, MSW.

Mean Square, MSB.

The Variance Component, σa².

Confidence Interval for σa²/σe².

Summary of Random ANOVA Model.

The Mixed-Effects ANOVA Model.

Mixed-Model ANOVA Assumptions.

Mixed-Model ANOVA Computation.

Multiple Comparisons in the Two-Factor Mixed Model.

Crossed and Nested Factors.

Computation of Sums of Squares for Nested Factors.

Determining the Sources of Variation in the ANOVA Table.

Degrees of Freedom for Nested Factors.

Determining Expected Mean Squares.

Error Mean Square in Complex ANOVA Designs.

The Incremental Generalization Strategy: Inferential Concentric Circles.

Model Simplification and Pooling.

The Experimental Unit and the Observational Unit.

20. Repeated-Measures ANOVA.

A Simple Repeated-Measures ANOVA.

Repeated-Measures Assumptions.

Trend Analysis on Repeated-Measures Factors.

Estimating Reliability via Repeated-Measures ANOVA.

Repeated-Measures Designs with a Between-Subjects Factor.

Repeated-Measures ANOVA with Two Between-Subjects Factors.

Trend Analysis on Between-Subjects Factors.

Repeated-Measures ANOVA with Two Within-Subjects Factors and Two Between-Subjects Factors.

Repeated-Measures ANOVA vs. MANOVA.

21. An Introduction to the Analysis of Covariance.

The Functions of ANCOVA.

ANOVA Results.


ANCOVA Computations, SStotal.

The Adjusted Within Sum of Squares, SS'W.

The Adjusted Sum of Squares Between Groups, SS'B.

Degrees of Freedom in ANCOVA and the ANCOVA Table.

Adjusted Means, Ȳ′j.

Confidence Intervals and Multiple Comparisons for Adjusted Means.

ANCOVA Illustrated Graphically.

ANCOVA Assumptions.

ANCOVA Precautions.

Covarying and Stratifying.

Appendix: Tables

Table A. Unit-Normal (z) Distribution.

Table B. Random Digits.

Table C. t-Distribution.

Table D. χ²-Distribution.

Table E. Fisher Z-Transformation.

Table F. F-Distribution.

Table G. Power Curves for the F-Test.

Table H. Hartley's Fmax Distribution.

Table I. Studentized Range Statistic: q-Distribution.

Table J. Critical Values of r.

Table K. Critical Values of rranks, Spearman's Rank Correlation.

Table L. Critical Values for the Dunn (Bonferroni) t-Statistic.

Table M. Critical Values for the Dunnett t-Statistic.

Table N. Coefficients (Orthogonal Polynomials) for Trend Analysis.

Table O. Binomial Probabilities when P = .5.

Glossary of Symbols.


Author Index.

Subject Index.

Additional information

Statistical Methods in Education and Psychology by Gene V. Glass
Used - Good
Pearson Education (US)
Book picture is for illustrative purposes only, actual binding, cover or edition may vary.
This is a used book; there is no escaping the fact that it has been read by someone else, and it will show signs of wear and previous use. Overall we expect it to be in good condition, but if you are not entirely satisfied, please get in touch with us.

Customer Reviews - Statistical Methods in Education and Psychology