Educational Statistics and Research Methods (ESRM) Program
University of Arkansas
Published: October 9, 2024
Modified: October 11, 2024
Today’s Class
Multivariate regression via path model
Model modification
Comparing and contrasting path analysis
Differences in model fit measures
How to interpret the results of path model
library(ESRM64503)
library(kableExtra)
library(tidyverse)
library(DescTools) # Desc() allows you to quickly screen data
library(lavaan)    # lavaan fits path models via sem()
# options(digits = 3)
head(dataMath)
id hsl cc use msc mas mse perf female
1 1 NA 9 44 55 39 NA 14 1
2 2 3 2 77 70 42 71 12 0
3 3 NA 12 64 52 31 NA NA 1
4 4 6 20 71 65 39 84 19 0
5 5 2 15 48 19 2 60 12 0
6 6 NA 15 61 62 42 87 18 0
dim(dataMath)
[1] 350 9
Path Analysis
Previous models considered simple pairwise correlations among observed variables (perf and use). These models may be limited in answering more complex research questions, such as whether variable A mediates the relationship between variable B and variable C
Path analysis is used to analyze multivariate linear models where outcomes can be also predictors
Path analysis details:
Model identification
Modeling workflow
Example Analyses
Procedure:
Model building
Model fitting
Model summary
Today’s Example Data
Data are simulated based on the results reported in Pajares & Miller (1994)
Sample of 350 undergraduates (229 women, 121 men)
In simulation, 10% of data points were missing (using missing completely at random mechanism).
Data Dictionary
Prior Experience at High School Level (HSL)
Prior Experience at College Level (CC)
Perceived Usefulness of Mathematics (USE)
Math Self-Concept (MSC)
Math Anxiety (MAS)
Math Self-Efficacy (MSE)
Math Performance (PERF)
Female (sex variable: 0 = male; 1 = female)
Multivariate Linear Regression Path Diagram
This path diagram illustrates a multivariate regression model examining relationships among six constructs related to mathematics education
Red solid arrows represent hypothesized direct effects (regression paths) between variables
Blue dashed arrows indicate residual variances for endogenous variables
The model shows:
Female as the sole exogenous variable (predictor with no antecedents)
Six endogenous variables: High School Math Experience, College Math Experience, Mathematics Self-Efficacy, Mathematics Self-Concept, Mathematics Performance, and Perceived Usefulness
Multiple direct and indirect pathways through which gender and prior experiences influence mathematics outcomes
A complex network of relationships suggesting mediation effects through self-beliefs (self-efficacy and self-concept)
The Big Picture
Path analysis is a multivariate statistical method that, when using an identity link, assumes the variables in an analysis are multivariate normally distributed
Mean vectors
Covariance matrices
By specifying simultaneous regression equations (the core of path models), a very specific covariance matrix is implied
This is where things deviate from our familiar R matrix
Like multivariate models, the key to path analysis is finding an approximation to the unstructured (saturated) covariance matrix
With fewer parameters, if possible
The art to path analysis is in specifying models that blend theory and statistical evidence to produce valid, generalizable results
Types of Variables in Path Model
Endogenous variable(s): variables whose variability is explained by one or more variables in a model
In linear regression, the dependent variable is the only endogenous variable in an analysis
Mathematics Performance (PERF) and Mathematics Usefulness (USE)
Exogenous variable(s): variables whose variability is not explained by any variables in a model
In linear regression, the independent variable(s) are the exogenous variables in the analysis
Female (F)
Procedure of Path Analysis Steps
Identification of Path Models
Model identification is necessary for statistical models to have “meaningful” results
For path models, identification can be very difficult
Because of their unique structure, path models must have identification in two ways:
“Globally” – so that the total number of parameters does not exceed the total number of means, variances, and covariances of the endogenous and exogenous variables
“Locally” – so that each individual equation is identified
Model identification is guaranteed if a model is both “globally” and “locally” identified
Global Identification: “T-rule”
A necessary but not sufficient condition for identifying a path model is that it has no more model parameters than “distributional parameters”
Distributional parameters: As the path models we discuss assume the multivariate normal distribution, we have two matrices of parameters
The mean vector
The covariance matrix
For the MVN, the so-called T-rule states that a model must have no more parameters than there are unique elements of the covariance matrix of all endogenous and exogenous variables in the analysis
Let s = p + q, the total number of endogenous (p) and exogenous (q) variables
Then the total number of unique elements is s(s + 1)/2
More on the “T-rule”
The classical definition of the “T-rule” counts the following entities as model parameters:
Direct effects (regression slopes)
Residual variances
Residual covariances
Exogenous variances
Exogenous covariances
Missing from this list are:
The set of exogenous variable means
The set of intercepts for endogenous variables
Each of the missing entities is part of the likelihood function, but they are considered “saturated,” so no additional parameters can be added (all are estimated)
These do not enter into the equation for the covariance matrix of the endogenous and exogenous variables
T-rule Identification Status
Just-identified: number of observed covariances = number of model parameters
Necessary for identification, but no model fit indices available
Over-identified: number of observed covariances > number of model parameters
Necessary for identification; model fit indices available
Under-identified: number of observed covariances < number of model parameters
Model is NOT IDENTIFIED: No results available
Our Destination: Overall Path Model
Based on the theory described in the introduction to Pajares & Miller (1994), the following model was hypothesized – use this diagram to build your knowledge of path models
Pajares, F., & Miller, M. D. (1994). Role of self-efficacy and self-concept beliefs in mathematical problem solving: A path analysis. Journal of Educational Psychology, 86(2), 193–203. https://doi.org/10.1037/0022-0663.86.2.193
Overall Path Model: How to Inspect
Path Model Setup - Questions for the Analysis
How many variables are in our model? s=7
Gender, HSL, CC, MSC, MSE, PERF, and USE
How many variables are endogenous? p=6
HSL, CC, MSC, MSE, PERF and USE
How many variables are exogenous? q=1
Gender
Is the model recursive or non-recursive?
Recursive – no feedback loops present
Path Model Setup – Questions for the Analysis
Is the model identified?
Check the t-rule first
How many covariance terms are there in the all-variable matrix?
7 × (7 + 1) / 2 = 28
How many model parameters are to be estimated?
12 direct paths
6 residual variances (only endogenous variables have resid. var.)
1 variance of the exogenous variable
6 endogenous variable intercepts
Not relevant for T-rule identification, but counted in R matrix
28 (total variances/covariances) > 19 (= 12 + 6 + 1) model parameters, thus this model is over-identified
We can use R to run analysis
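The T-rule arithmetic above can be sketched as a small helper (a minimal illustration; the counts of 7 variables and 12 + 6 + 1 parameters come from the model described above):

```python
def t_rule(n_vars, n_params):
    """Classify T-rule identification status from the number of observed
    variables and the number of estimated model parameters."""
    # Unique elements of the covariance matrix of all s variables: s(s+1)/2
    unique = n_vars * (n_vars + 1) // 2
    if n_params < unique:
        return unique, "over-identified"
    elif n_params == unique:
        return unique, "just-identified"
    return unique, "under-identified"

# s = 7 variables; 12 direct paths + 6 residual variances + 1 exogenous variance
unique, status = t_rule(7, 12 + 6 + 1)
print(unique, status)  # 28 over-identified
```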
Overall Hypothesized Path Model: Equation Form
The path model can be re-expressed as the following six endogenous-variable regression equations:
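Written out (reconstructed from the direct paths in the lavaan syntax shown later for this model; superscripts name the outcome, subscripts the predictor), the six equations are:

```latex
\begin{aligned}
\text{HSL}_i &= \beta_{0}^{\text{HSL}} + \beta_{F}^{\text{HSL}}\,F_i + e_i^{\text{HSL}} \\
\text{CC}_i  &= \beta_{0}^{\text{CC}} + \beta_{\text{HSL}}^{\text{CC}}\,\text{HSL}_i + e_i^{\text{CC}} \\
\text{MSE}_i &= \beta_{0}^{\text{MSE}} + \beta_{\text{HSL}}^{\text{MSE}}\,\text{HSL}_i + \beta_{\text{CC}}^{\text{MSE}}\,\text{CC}_i + \beta_{F}^{\text{MSE}}\,F_i + e_i^{\text{MSE}} \\
\text{MSC}_i &= \beta_{0}^{\text{MSC}} + \beta_{\text{HSL}}^{\text{MSC}}\,\text{HSL}_i + \beta_{\text{CC}}^{\text{MSC}}\,\text{CC}_i + \beta_{\text{MSE}}^{\text{MSC}}\,\text{MSE}_i + e_i^{\text{MSC}} \\
\text{USE}_i &= \beta_{0}^{\text{USE}} + \beta_{\text{MSE}}^{\text{USE}}\,\text{MSE}_i + e_i^{\text{USE}} \\
\text{PERF}_i &= \beta_{0}^{\text{PERF}} + \beta_{\text{HSL}}^{\text{PERF}}\,\text{HSL}_i + \beta_{\text{MSC}}^{\text{PERF}}\,\text{MSC}_i + \beta_{\text{MSE}}^{\text{PERF}}\,\text{MSE}_i + e_i^{\text{PERF}}
\end{aligned}
```

Counting the slopes across the six equations (1 + 1 + 3 + 3 + 1 + 3) recovers the 12 direct paths used in the identification check above.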
Before we interpret the estimation result, we need to assess the fit of a multivariate linear model to the data, in an absolute sense
If a model does not fit the data:
Parameter estimates may be biased
Standard errors of estimates may be biased
Inferences made from the model may be wrong
If the saturated model fit is wrong, then the LRTs will be inaccurate
Not all “good-fitting” models are useful…
…model fit just allows you to talk about your model…there may be nothing of significance (statistically or practically) in your results, though
Global measures of Model fit
Root Mean Square Error of Approximation (RMSEA)
Likelihood ratio test
User model versus the saturated model: Testing if your model fits as well as the saturated model
The saturated model versus the baseline model: Testing whether any variables have non-zero covariances (significant correlations)
User model versus baseline model
CFI (the comparative fit index)
TLI (the Tucker–Lewis index)
Log-likelihood and Information Criteria
Standardized Root Mean Square Residual (SRMR)
How far off a model’s correlations are from the saturated model correlations
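As a sketch, the SRMR follows the standard textbook definition below (lavaan's exact computation may differ in minor details): it averages the squared discrepancies between observed correlations r and model-implied correlations r̂ over the s(s + 1)/2 unique elements:

```latex
\text{SRMR} = \sqrt{\frac{\sum_{i \le j} \left( r_{ij} - \hat{r}_{ij} \right)^2}{s(s+1)/2}}
```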
Model Fit Statistics: Example
summary(model01.fit, fit.measures = TRUE)
Model Test User Model:
                                              Standard      Scaled
  Test Statistic                                58.896      58.913
  Degrees of freedom                                 9           9
  P-value (Chi-square)                           0.000       0.000
  Scaling correction factor                                  1.000
    Yuan-Bentler correction (Mplus variant)

Model Test Baseline Model:
  Test statistic                               619.926     629.882
  Degrees of freedom                                21          21
  P-value                                        0.000       0.000
  Scaling correction factor                                  0.984

Root Mean Square Error of Approximation:
  RMSEA                                          0.126       0.126
  90 Percent confidence interval - lower         0.096       0.096
  90 Percent confidence interval - upper         0.157       0.157
  P-value H_0: RMSEA <= 0.050                    0.000       0.000
  P-value H_0: RMSEA >= 0.080                    0.994       0.994

User Model versus Baseline Model:
  Comparative Fit Index (CFI)                    0.917       0.918
  Tucker-Lewis Index (TLI)                       0.806       0.809
  Robust Comparative Fit Index (CFI)                         0.918
  Robust Tucker-Lewis Index (TLI)                            0.809

Loglikelihood and Information Criteria:
  Loglikelihood user model (H0)              -5889.496   -5889.496
  Scaling correction factor                                  0.965
      for the MLR correction
  Loglikelihood unrestricted model (H1)      -5860.048   -5860.048
  Scaling correction factor                                  0.975
      for the MLR correction
  Akaike (AIC)                               11826.992   11826.992
  Bayesian (BIC)                             11919.583   11919.583
  Sample-size adjusted Bayesian (SABIC)      11843.446   11843.446
1. Model Test User Model: a likelihood ratio (deviance) test comparing our model (H0) with the saturated model – the saturated model fits significantly better (χ²(df = 9) = 58.896, p < .001)
2. Model Test Baseline Model: a likelihood ratio (deviance) test comparing the baseline (independence) model with the saturated model
3. Root Mean Square Error of Approximation: the RMSEA estimate is 0.126; good fit is conventionally 0.05 or less
4. User Model versus Baseline Model: the CFI estimate is .917 and the TLI is .806; good fit is conventionally 0.95 or higher
Based on the model fit statistics, we conclude that our model does not adequately approximate the covariance matrix. Therefore, inferences from these results may be unreliable due to potentially biased standard errors and parameter estimates
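The reported RMSEA can be reproduced from the user-model test statistic. Using the common population-based formula with χ² = 58.896, df = 9, and N = 350:

```latex
\text{RMSEA} = \sqrt{\max\!\left( \frac{\chi^2 - df}{df\,(N-1)},\; 0 \right)}
             = \sqrt{\frac{58.896 - 9}{9 \times 349}} \approx 0.126
```

This matches the output above: the excess of χ² over its degrees of freedom, scaled by df and sample size, quantifies misfit per degree of freedom.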
Model Modification Method 1: Check Residual Covariance
Having concluded that our model fit is inadequate, we must modify the model to improve fit
These modifications are data-driven rather than theory-driven, which raises concerns about their generalizability beyond this sample
Ideally, model modifications should be guided by substantive theory
However, we can inspect the normalized residual covariance matrix (analogous to z-scores) to identify areas of greatest misfit
Several normalized residual covariances exceed 1.96 in absolute value: MSC with USE, and CC with Female
The largest normalized covariances suggest relationships that may be present that are not being modeled:
Add a direct effect between Female and CC
Add a direct effect (residual covariance) between MSC and USE
Model Modification Method 2: More Help for Fit
As we used Maximum Likelihood to estimate our model, another useful feature is that of the modification indices
Modification indices (also called Score or Lagrange multiplier tests) approximate the change in the log-likelihood from adding a given model parameter (larger values indicate greater improvement in fit)
lhs op rhs mi epc sepc.lv sepc.all sepc.nox
54 msc ~ use 41.517 0.299 0.299 0.275 0.275
31 use ~~ msc 41.517 70.912 70.912 0.386 0.386
46 use ~ msc 40.032 0.451 0.451 0.490 0.490
63 hsl ~ mse 6.486 1.139 1.139 10.265 10.265
65 hsl ~ cc 6.477 0.447 0.447 1.992 1.992
39 cc ~~ hsl 6.477 15.131 15.131 1.974 1.974
59 cc ~ msc 6.477 -0.568 -0.568 -1.654 -1.654
60 cc ~ female 6.477 -1.756 -1.756 -0.142 -0.298
70 female ~ cc 6.477 -0.012 -0.012 -0.145 -0.145
68 female ~ mse 6.477 -0.030 -0.030 -0.748 -0.748
The modification indices have three large values:
A direct effect predicting MSC from USE
A direct effect predicting USE from MSC
A residual covariance between USE and MSC
Note: the MI approximates the change in −2 log-likelihood from freeing the parameter, and the EPC is the expected parameter estimate
The MI is like a 1 DF Chi-Square Deviance test
Values greater than 3.84 are likely to be significant changes in the log-likelihood
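The 3.84 cutoff is the .05 critical value of a 1-df chi-square. For 1 df, the upper-tail probability has a closed form, P(χ²₁ > x) = erfc(√(x/2)), so MI values can be converted to approximate p-values with only the standard library (a sketch; only the 41.517 value appears in the output above):

```python
import math

def chi2_1df_pvalue(x):
    """Upper-tail p-value of a chi-square statistic with 1 degree of freedom."""
    return math.erfc(math.sqrt(x / 2.0))

print(round(chi2_1df_pvalue(3.84), 3))  # ~0.05: the conventional cutoff
print(chi2_1df_pvalue(41.517) < 0.001)  # the USE ~~ MSC modification index
```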
All three modifications involve the same variables; therefore, we can only select one
Substantive theory would ideally guide this decision
As substantive theory is not available to guide us here, we will choose to add a residual covariance between USE and MSC (the “~~” operator in lavaan syntax)
This acknowledges that their covariance remains unexplained by the specified model structure—not an ideal theoretical solution, but one that may enable valid inference if adequate model fit is achieved
MI = 41.517
EPC = 70.912
Model 2: New Model Adding a Residual Covariance Between MSC and USE
Model 2: lavaan syntax
# model 02: Add residual covariance between USE and MSC -----------------------
model02.syntax = "
  # endogenous variable equations
  perf ~ hsl + msc + mse
  use  ~ mse
  mse  ~ hsl + cc + female
  msc  ~ mse + cc + hsl
  cc   ~ hsl
  hsl  ~ female

  # endogenous variable intercepts
  perf ~ 1
  use  ~ 1
  mse  ~ 1
  msc  ~ 1
  cc   ~ 1
  hsl  ~ 1

  # endogenous variable residual variances
  perf ~~ perf
  use  ~~ use
  mse  ~~ mse
  msc  ~~ msc
  cc   ~~ cc
  hsl  ~~ hsl

  # endogenous variable residual covariances
  # none specified in the original model, so these are fixed to zero:
  perf ~~ 0*use + 0*mse + 0*msc + 0*cc + 0*hsl
  use  ~~ 0*mse + msc + 0*cc + 0*hsl # <- the changed syntax (no 0* in front of msc)
  mse  ~~ 0*msc + 0*cc + 0*hsl
  msc  ~~ 0*cc + 0*hsl
  cc   ~~ 0*hsl
"
Assessing Model fit of the Modified Model
# estimate model
model02.fit = sem(model02.syntax, data = dataMath, mimic = "MPLUS", estimator = "MLR")
# see if model converged
inspect(model02.fit, what = "converged")
[1] TRUE
# show summary of model fit statistics and parameters
summary(model02.fit, standardized = TRUE, fit.measures = TRUE)
Now we must start over with our path model decision tree
The model is identified (now 20 parameters < 28 covariances)
Estimation converged; Standard errors look acceptable
Model fit indices:
The comparison with the saturated model suggests our model fits statistically
The RMSEA is 0.049, which indicates good fit
The CFI and TLI both indicate good fit (CFI/TLI > .90)
The SRMR also indicates good fit
Therefore, we conclude the model adequately approximates the covariance matrix, allowing us to proceed with parameter interpretation. However, we first examine the residual covariances and modification indices
No modification indices are substantially large, although some exceed 3.84
These are not pursued given that our model demonstrates adequate fit, and adding these parameters may not be theoretically meaningful
More on Modification Indices
Recall from our original model that we received the following modification index values for the residual covariance between MSC and USE:
MI = 41.517
EPC = 70.912
model01.mi[2, ]
lhs op rhs mi epc sepc.lv sepc.all sepc.nox
31 use ~~ msc 41.517 70.912 70.912 0.386 0.386
The estimated residual covariance between MSC and USE in the modified model is: 70.249
parameterestimates(model02.fit) |>
  filter(lhs == "use" & op == "~~" & rhs == "msc")
lhs op rhs est se z pvalue ci.lower ci.upper
1 use ~~ msc 70.249 10.358 6.782 0 49.947 90.551
The difference in log-likelihoods is:
-2*(change) = 58.279
anova(model01.fit, model02.fit)
Scaled Chi-Squared Difference Test (method = "satorra.bentler.2001")
lavaan->lavTestLRT():
lavaan NOTE: The "Chisq" column contains standard test statistics, not the
robust test that should be reported per model. A robust difference test is
a function of two standard (not robust) statistics.
Df AIC BIC Chisq Chisq diff Df diff Pr(>Chisq)
model02.fit 8 11785 11881 14.827
model01.fit 9 11827 11920 58.896 58.279 1 2.275e-14 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
The values given by the MI and EPC are approximations
cc use mse msc perf
0.03875216 0.04124687 0.29765620 0.48535191 0.57624994
The R2 for each endogenous variable:
CC – 0.039
USE – 0.041
MSE – 0.298
MSC – 0.485
PERF – 0.576
Note that College Experience and Perceived Usefulness both have relatively low proportions of variance explained by the model
The R² for Perceived Usefulness could have been increased by specifying a direct path from Math Self-Concept to Perceived Usefulness instead of their residual covariance
Overall Model Interpretation
High School Experience and Female are significant predictors of College Experience (β_HSL→CC = .695, p = .006; β_F→CC = −1.662, p = .021)
Female students report lower College Experience than male students
Greater High School Experience is associated with increased College Experience
High School Experience, College Experience, and Gender are significant predictors of Math Self-Efficacy (β_HSL→MSE = 4.05, p < .001; β_CC→MSE = .383, p < .001; β_F→MSE = 3.819, p = .001)
Greater High School and College Experience are associated with higher Math Self-Efficacy
Female students demonstrate higher Math Self-Efficacy than male students
High School Experience, College Experience, and Math Self-Efficacy are significant predictors of Math Self-Concept (β_HSL→MSC = 2.894, p < .001; β_CC→MSC = .545, p < .001; β_MSE→MSC = .692, p < .001)
Greater High School Experience, College Experience, and Math Self-Efficacy are associated with higher Math Self-Concept
Higher Math Self-Efficacy is significantly associated with increased Perceived Usefulness
Higher Math Self-Efficacy and Math Self-Concept are associated with improved Math Performance scores
Math Self-Concept and Perceived Usefulness exhibit a significant residual covariance
Wrapping Up
In this lecture we discussed the basics of path analysis
Model specification/identification
Model estimation
Model fit (necessary, but not sufficient)
Model modification and re-estimation
Final model parameter interpretation
There is a lot to the analysis – but what is important to remember is the overarching principle of multivariate analyses: covariance between variables is important
Path models imply very specific covariance structures
The validity of the results hinges upon accurately approximating the covariance matrix