Summary and Keywords
Mediator variables are variables that lie between the cause and effect in a causal chain. In other words, mediator variables are the mechanisms through which change in one variable causes change in a subsequent variable. The single-mediator model is deceptively simple because it has only three variables: an antecedent, a mediator, and a consequent. Determining that a variable functions as a mediator is a difficult process, however, because causation can be inferred only when many strict assumptions are met, including, but not limited to, perfectly reliable measures, correct temporal design, and no omitted confounders. Since many of these assumptions are difficult to assess and rarely met in practice, the significance of a statistical test of mediation alone usually provides only weak evidence of mediation.
New methodological approaches are constantly being developed to circumvent these limitations. Specifically, new methods are being created for the following purposes: (1) to assess the impact of violating assumptions (e.g., sensitivity analyses) and (2) to make fewer assumptions and provide more flexible analysis techniques (e.g., Bayesian analysis or bootstrapping) that may be more robust to assumption violations. Despite these advances, the importance of the design of a study cannot be overstated. A statistical analysis, no matter how sophisticated, cannot redeem a study that measured the wrong variables or used an incorrect temporal design.
Overview of Current Status
Mediator variables are variables that are intermediate in the causal relation between two other variables. That is, mediator variables (the second variable in the causal chain) transmit changes in the first variable (the cause) to the third variable (the effect). Consider setting three dominoes in a row such that tipping over the first domino causes the second domino to fall, which in turn causes the third domino to fall—a chain reaction. Here the middle domino is an example of a mediator variable because when the first domino falls over, the effect is transmitted to the last domino through the middle domino. If the middle domino (mediator) were removed, the first domino would be prevented from knocking over the third domino. Hence, if mediator variables are not present or are forced to remain unchanged, then the first variable, often called the independent variable, predictor, or antecedent, cannot have its entire effect on the third variable, usually referred to as the dependent variable, outcome, or consequent. This is why mediator variables are conceptualized as being the intermediate links in a causal chain between two other variables.
Mediator variables are particularly important to psychologists, who are very interested in understanding how or why two variables are related. For example, how does cognitive behavioral therapy (CBT) affect depression? One way CBT has been found to affect depression is by reducing negative thinking (Kaufman, Rohde, Seeley, Clarke, & Stice, 2005). Here negative thinking is the intermediate mediator variable between CBT and depression such that receiving CBT causes a reduction in negative thinking and the reduction in negative thinking reduces depression symptoms. The statistical identification of the mediating mechanisms (e.g., how an intervention works), called mediation analysis, may allow for a more targeted treatment or for the addition of components to a treatment to increase its effectiveness. The previous examples are not intended to insinuate that mediation analysis is limited to manipulated causes, however. Observational studies where the antecedent is not directly manipulated by the researcher are candidates for mediation analysis as well.
The two most important words in the definition for mediator variables are causal and intermediate because these properties are what differentiate mediator variables from other variables that may play a role in the relation between two other variables. For example, a confounder variable is a variable that is related to both the independent variable and the outcome, often causally, that explains all or part of the relation between these two variables but is not an intermediate link in a causal chain. A classic example of a confounder variable is the significant positive relation between violent crime and ice cream consumption that can be explained by an increase in temperature during the summer (Le Roy, 2009). Moderating variables are also often confused with mediating variables. A moderator variable is a variable that addresses the question of under what circumstances a particular relationship holds between two other variables. That is, the size and direction of the relation between the two other variables depends on the value of the moderator variable, but the moderator does not transmit changes in one variable to the other. For example, if the effect of CBT on depression was stronger for females than males (an effect that has not typically been found; Cuijpers et al., 2014), gender would moderate the effect of the intervention on depression. The distinction between a mediator, a confounder, and a moderator can also be depicted using path model diagrams as in Figure 1.
The Single-Mediator Model
At a minimum, a mediation model must include an independent variable, an outcome, and a single mediator variable that transmits all or a portion of the effect that the independent variable has on the outcome, which is called a single-mediator model. One of the first mentions of the single-mediator model is the Stimulus Organism Response model where the organism (mediator variable) determines the type of response (outcome) that the stimulus (antecedent) causes (Woodworth, 1926). Early contributions to the single-mediator model were also made by Sewall Wright (1921), who demonstrated that the effect of an antecedent on a consequent through an intermediate variable may be quantified by multiplying the path coefficient between the antecedent and intermediate variable by the path coefficient between the intermediate variable and consequent (i.e., multiplying a and b in Figure 1).
A further source of confusion is that the mediator/moderator distinction is often muddled. Though far from being the first or only treatment of mediator variables (e.g., James & Brett, 1984; Judd & Kenny, 1981), the most prominent paper in psychology discussing the single-mediator model and mediator variables in general is the Baron and Kenny (1986) paper in the Journal of Personality and Social Psychology, which to date has been cited over 40,000 times. There are two main reasons why the Baron and Kenny paper is so highly cited. First, their definition of mediator variables and the distinction between mediator and moderator variables is very clear. Second, they describe a set of four steps that may be used to statistically test for the presence of a mediator variable using a set of three regression equations. These four steps, often referred to as the Causal Steps Test of mediation (e.g., MacKinnon, Lockwood, Hoffman, West, & Sheets, 2002), are illustrated in Figure 2.
• Step 1: Test the overall effect of the antecedent on the outcome, labeled c, for significance. The reasoning behind this step is that if c is not statistically significant, there is no effect to be mediated.
• Step 2: Assuming c is found to be significant, test the effect of the antecedent on the mediator variable, labeled a, for significance. When the antecedent variable is a directly manipulated variable, such as random assignment to a treatment and control group, the a effect is called the action theory (MacKinnon, 2008).
• Step 3: If c and a are significant, test the effect of the mediator on the outcome controlling for the antecedent, b, for significance. As directly manipulating the mediator variable is untenable in most situations, the b effect is called the conceptual theory (MacKinnon, 2008).
The testing of Step 4 requires understanding that though a mediator variable transmits the effect of the antecedent to the outcome, it is possible that the mediator variable does not transmit all of the effect. That is, there may still be some effect of the antecedent on the outcome that is not transmitted through the mediator. This potential non-transmitted effect, called the direct effect and labeled c′, is the effect of the antecedent on the outcome controlling for the mediator. In the event that the entire effect of the antecedent on the outcome passes through the mediator variable, called complete mediation, then there should be no direct effect and c′ = 0. As noted by Baron and Kenny, in psychology rarely is 100% of the variability in a variable explained, so a more likely situation is that part of the effect of the antecedent on the outcome passes through the mediator and part of the effect does not. The case where c′ is not exactly zero is called incomplete mediation. Because the total effect is just being split into the direct effect and the effect passing through the mediator, if a mediating variable is present then the direct effect should be smaller than the total effect such that |c′| < |c|:
• Step 4: If c, a, and b are significant, then test the relation |c′| < |c|. If this step is not passed, then the relationship of the mediator variable to the other two variables is more complex than simple mediation.
If all four steps are passed, then mediation is said to occur. If any of the four steps fail, then mediation does not occur.
Beyond the Causal Steps Test
Though the Causal Steps Test is easy to understand and implement, there are several potential problems with it that have prompted revisions and new statistical tests for mediator variables to be developed. The first issue concerns the testing of the total effect in Step 1. When estimating these effects using multiple regression models where the mediator and outcome are continuous, normally distributed variables, the relation between the effects is c = c′ + ab (MacKinnon, Warsi, & Dwyer, 1995). Conceptually this makes sense because the total effect is being split into two pieces: the part of the effect that is being transmitted through the mediator, which is equal to ab and is called the mediated effect or indirect effect because the antecedent is indirectly affecting the outcome through the mediator, and the part of the effect that is not being transmitted through the mediator, the direct effect c′. When the mediated effect and direct effect are both in the same direction, called consistent mediation, the total effect will be larger (i.e., farther from zero) than either of the individual effects. But when ab and c′ have opposite signs, called inconsistent mediation, c will be closer to zero (MacKinnon, Krull, & Lockwood, 2000). If the mediated and direct effects are exactly equal in size but have opposite signs, then the total effect will be equal to zero. Hence, the Causal Steps Test will conclude there is no evidence of mediation because Step 1 will fail, no matter how large the value of the mediated effect. This has led some researchers to recommend dropping Step 1 and Step 4 and proceeding only with Steps 2 and 3, which has been called the Joint Significance Test (MacKinnon et al., 2002).
The second problem with the Causal Steps Test, inherent to the Joint Significance Test as well, is that it does not directly test the mediated effect ab, instead testing a and b separately. Sobel (1982) provided a solution by deriving the standard error of the mediated effect. The mediated effect can then either be divided by this standard error and the resulting ratio compared to a normal or t distribution to test for significance, called the First-Order Standard Error Test or Sobel Test, or the standard error can be used to create a symmetric confidence interval around the mediated effect. While eagerly adopted by psychologists as an alternative to the Causal Steps Test, the First-Order Standard Error Test has a fatal flaw: in most circumstances the mediated effect is not normally distributed, so the test is usually biased. Tests that do not rely on the mediated effect being normally distributed have been developed to address this issue and are discussed in the “Current Trends in Mediation” section.
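Sobel's first-order standard error is simple enough to sketch directly; the input values below are invented for illustration:

```python
import math

def sobel_test(a, se_a, b, se_b, z_crit=1.96):
    """First-order (Sobel) test of the mediated effect ab.

    Returns the estimate, its first-order standard error, the z ratio,
    and a symmetric 95% confidence interval.
    """
    ab = a * b
    se_ab = math.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
    z = ab / se_ab
    ci = (ab - z_crit * se_ab, ab + z_crit * se_ab)
    return ab, se_ab, z, ci

ab, se_ab, z, ci = sobel_test(a=0.4, se_a=0.10, b=0.3, se_b=0.15)
print(round(se_ab, 4), round(z, 2))  # 0.0671 1.79
```

Note that the confidence interval produced here is symmetric around ab, which is exactly the flaw described above: the true sampling distribution of ab is generally skewed, so this interval is usually inaccurate.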
A third problem with the Causal Steps Test is that it does not readily translate into more complex models. For example, if the mediator and the outcome variables are not continuous, normally distributed variables, then c = c′ + ab may no longer hold, in which case the steps no longer make as much sense. Additionally, what happens when there are multiple mediating variables in the same model? There are several ways in which the relation between multiple mediators in the same model can be conceptualized. Consider the single-mediator model from Figure 1. It is possible that there is another mediator variable that mediates the relation between the first mediator and the outcome, such that the second mediator variable acts as another link in the causal chain between the antecedent and outcome. Returning to the domino example, this would be equivalent to adding a fourth domino between the middle (the first mediator variable) and last dominoes (the outcome); this is known as a serial or sequential mediator model and is illustrated in Figure 3.
Serial mediator models are common in psychology, as illustrated by Tett and Meyer (1993), who hypothesized that organizational commitment (antecedent) has a positive relationship with job satisfaction (mediator #1), which in turn has a negative relationship with intention to quit the job (mediator #2), which ultimately has a positive relationship with actually quitting the job (consequent). This combination of effects would result in organizational commitment having a negative indirect effect on quitting the job through job satisfaction and intentions to quit. Although this model does not hypothesize that organizational commitment has a direct effect on turnover, results from a meta-analysis conducted by Tett and Meyer suggest that there is a negative direct effect. Thus, although this example has a combination of positive and negative hypothesized relationships, it is an example of consistent mediation because the direct and indirect effects are both negative.
Multiple mediator variables can also be added to a model such that the antecedent simultaneously affects both mediator variables, which in turn both affect the outcome, but neither mediator variable transmits the effect of the other. For the domino example this would be equivalent to adding a fourth domino next to the existing middle domino such that the first domino knocks over both the middle dominoes simultaneously, which in turn knock down the last domino, but the middle dominoes do not knock each other over. This configuration, also illustrated in Figure 3, is known as a parallel or stacked mediator model. An example from health psychology of a parallel mediator model hypothesizes that the effect of a weight-loss intervention for obese men (independent variable) on body weight six months later (outcome) was mediated simultaneously by increases in number of steps per day (mediator #1) and decreases in unhealthy meal choices (mediator #2; Young et al., 2015). The mediator variables are parallel because while the two mediator variables may be correlated, increasing steps did not cause a change in meal choices and vice versa.
Multiple mediators increase the complexity of the mediation analysis. An idea that can still be used to cope with this additional complexity, however, is the idea of the total effect of an antecedent on a consequent being equal to the sum of the direct and indirect effects. When there are multiple mediators in a model, there are multiple indirect effects. Each of these indirect effects is unique in that it represents how the effect of the antecedent on the consequent is transmitted through a specific set of mediator variables in a specific order, which is why the effects are called specific indirect effects (Bollen, 1987) or time-specific indirect effects in autoregressive panel models (Cole & Maxwell, 2003). Using the parallel mediation model in Figure 3, there are two specific indirect effects: a₁b₁ and a₂b₂. The overall or total indirect effect is equal to the sum of all of the specific indirect effects, a₁b₁ + a₂b₂. And finally the total effect is then equal to c = c′ + a₁b₁ + a₂b₂.
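For the parallel two-mediator model this bookkeeping is simple arithmetic; the path values below are made up purely for illustration:

```python
# Hypothetical path estimates for a parallel two-mediator model.
a1, b1 = 0.5, 0.4   # X -> M1 and M1 -> Y
a2, b2 = 0.3, 0.2   # X -> M2 and M2 -> Y
c_prime = 0.15      # direct effect of X on Y controlling for both mediators

specific = [a1 * b1, a2 * b2]        # the two specific indirect effects
total_indirect = sum(specific)       # a1*b1 + a2*b2
total = c_prime + total_indirect     # c = c' + a1*b1 + a2*b2

print([round(s, 2) for s in specific],
      round(total_indirect, 2),
      round(total, 2))               # [0.2, 0.06] 0.26 0.41
```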
Multiple-mediators models can also contain a combination of mediator variables, some serially related, others entered in parallel. For example, Ranby et al. (2009) examined the impact of an intervention designed to decrease unhealthy body behaviors in female student athletes (independent variable) by first causing changes in knowledge, social norms, and mood management (mediators #1–3), which in turn caused changes in outcome expectancies and self-efficacy (mediators #4 and #5), which then changed intentions to use steroids and diet pills (outcome).
One of the most important assumptions of the single-mediator model is temporal precedence, which states that a cause must precede an effect in time. Thus, to make a strong argument for a variable being a mediator, the measurement of the antecedent, mediator, and consequent must reflect that the antecedent needs time to alter the mediator variable, which in turn needs time to alter the consequent. Therefore, in most cases, longitudinal data measured at a minimum of three points in time are needed to adequately assess mediator variables. Due to a variety of factors, including time and resource constraints, however, longitudinal data are used to test for mediation far too rarely. This is particularly problematic because Maxwell and Cole (2007) demonstrated that in most situations, cross-sectional data provide substantially biased estimates of the true longitudinal mediation effects. What this means is that the presence of a statistically significant indirect effect in cross-sectional data does not necessarily mean mediation has occurred. That is, not all indirect effects are mediated effects. Another reason for this is that an additional assumption of mediation analysis requires that the causal ordering of the variables is correctly specified and cross-sectional data do not inherently provide any such information regarding the direction of the causal effects. Since effects cannot occur before causes, if the mediator is measured after the antecedent, the mediator cannot cause the antecedent, though it remains the researcher’s responsibility to show that the antecedent causes the mediator variable.
Using longitudinal data does not guarantee that all potential timing issues have been correctly addressed, however, because effects do not necessarily remain constant across time. Some effects extinguish, while other effects are delayed. The domino example should make this point readily apparent because if the spacing of the dominoes, or by analogy when the variables in a mediation model are measured, is too wide, then when each domino falls, it cannot knock over the next domino in the chain. Hence, careful consideration needs to be given to the spacing of the measurements of the longitudinal data, which is part of what Collins and Graham (2002) call the temporal design of a study, to ensure that each effect is captured at its peak or the mediated effect will be underestimated.
A third assumption of mediation models is that no variables that may explain the relations between the antecedent, mediator, and outcome have been omitted from the model. Omitted variables that have a relationship with the variables in the model may result in biased estimates of the mediated effect, which can lead to incorrect substantive conclusions (e.g., incorrectly concluding that an effect is significant). Note that not controlling for previous levels of the mediator and outcome can also lead to biased estimates (Cole & Maxwell, 2003). The effects of omitted variables on the relation between the antecedent and mediator are minimized with an effective randomization of treatment. Typically, this cannot be done for the mediator, unless individuals are randomized to both the antecedent and mediator variables, which is called double randomization. Given the inability to randomly assign the mediator and the knowledge that every model is likely to have some omitted confounder of the mediator-outcome relationship, something must be done to assess the effect that this limitation is having on the parameter estimates. Sensitivity analysis, which is discussed in the “Challenges in Mediation Analysis” section, is a tool that is used to assess the effect that mediator-outcome confounding is having on the estimates.
Even if the assumptions related to longitudinal data, measurement timing, and omitted variables are met, there still may be problems. Yet another assumption of mediation models is that all of the variables are measured without error. In mediation models the effect of violating this assumption is typically that the a and b paths are underestimated, which would make it less likely that the mediated effect is significant (Hoyle & Kenny, 1999). If this was the only possible consequence of measurement error, then measurement error could only force a significant mediated effect to become nonsignificant, but in more complex models the effect of measurement error is not nearly as straightforward. Some paths may be overestimated while others are underestimated (Cole & Preacher, 2014). One way to deal with measurement error is to use latent variable models that correct the estimates of the mediated effect for measurement error.
Current Trends in Mediation
Current trends in mediation may be divided into three subsections: (1) computationally intensive approaches that have only been possible with the increase in computational power of personal computers, (2) more complex statistical models that allow for longitudinal data, nesting of data, and categorical variables, and (3) mediation analyses conducted in a causal inference or hypothetical outcomes framework.
Computationally Intensive Approaches
When conducting a mediation analysis, decisions must be made regarding whether the direct and indirect effects are farther from zero than expected by chance, which is typically done using a combination of statistical significance testing (i.e., p-values or confidence intervals) and effect sizes. As discussed previously, neither the Causal Steps Test nor the Joint Significance Test directly tests the mediated effect ab for significance, instead testing the individual effects a and b separately. The First-Order Standard Error (i.e., Sobel) Test does directly test the mediated effect ab, but this test assumes that the sampling distribution of ab is normal, which is typically not the case (MacKinnon et al., 2002). As a result of this shortcoming, computationally intensive approaches that do not rely on normality assumptions have been developed. Many such tests of mediation exist; three of the most widely used approaches are described here: the distribution of the product, bootstrapping, and the use of Bayesian methods. In general, any of these three approaches should be used instead of the Causal Steps, Joint Significance, or First-Order Standard Error Tests because they provide more accurate statistical tests and confidence intervals.
Distribution of the Product
The major issue with the First-Order Standard Error Test is that it assumes the mediated effect is normally distributed. Hence, the symmetric confidence intervals created using the first-order standard error are incorrect. But if the actual distribution of the mediated effect were known, it could be used to create correct confidence intervals. The problem is that the distribution of the mediated effect changes shape depending on the values of a, b, and their respective standard errors. MacKinnon and colleagues (MacKinnon, Fritz, Williams, & Lockwood, 2007; MacKinnon, Lockwood, & Williams, 2004; MacKinnon et al., 2002) addressed this issue by creating a program, PRODCLIN, that estimates correct confidence intervals based on the actual distribution of the mediated effect for any values of a and b; this is called the Distribution of the Product Test. This work was later implemented in the RMediation package for the statistical program R by Tofighi and MacKinnon (2011). Due to the distribution of the mediated effect often being skewed, the confidence intervals created by the Distribution of the Product Test are usually asymmetric, meaning that the confidence interval is not centered on the value of ab, but these confidence intervals are interpreted in the same way as symmetric confidence intervals.
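The shape of the distribution of the product can be approximated by simulation rather than analytically (this Monte Carlo sketch is in the spirit of the Distribution of the Product Test, not a reimplementation of PRODCLIN; the estimates and standard errors below are hypothetical):

```python
import random

def monte_carlo_ci(a, se_a, b, se_b, reps=100_000, alpha=0.05, seed=7):
    """Approximate an asymmetric CI for ab by simulating a and b as
    normal random variables and taking percentiles of their product."""
    rng = random.Random(seed)
    products = sorted(rng.gauss(a, se_a) * rng.gauss(b, se_b)
                      for _ in range(reps))
    lo = products[int(reps * alpha / 2)]
    hi = products[int(reps * (1 - alpha / 2)) - 1]
    return lo, hi

lo, hi = monte_carlo_ci(a=0.4, se_a=0.10, b=0.3, se_b=0.15)
print(round(lo, 3), round(hi, 3))  # interval is asymmetric around ab = 0.12
```

Because the product of two normal variables is skewed, the upper limit sits farther from ab = 0.12 than the lower limit does, which is exactly the asymmetry described above.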
Another approach to testing the mediated effect that many researchers recommend (e.g., Bollen & Stine, 1990; Lockwood & MacKinnon, 1998; MacKinnon et al., 2004; Preacher & Hayes, 2008; Shrout & Bolger, 2002) that does not rely on the normal distribution is bootstrapping. Bootstrapping involves taking repeated samples, called bootstrap samples, from the original sample with replacement. For example, consider having a sample of 100 participants from a study—this is the original sample. A bootstrap sample is then created by randomly selecting a sample of 100 cases from this original sample. If the 100 cases in the bootstrap sample were selected without replacing any of the selected cases, the bootstrap sample would be identical to the original sample. If instead each case is replaced after it is selected, then any specific case can be selected multiple times for that bootstrap sample—it is even possible to end up with a bootstrap sample that was made up of all the same case! In general, however, the bootstrap sample will end up with some cases selected from the original sample multiple times and others not selected at all. The mediated effect is then estimated using the data in the bootstrap sample, resulting in a bootstrap estimate of the mediated effect.
If a large number of bootstrap samples were taken from the original sample, say 1,000, and then the mediated effect is estimated in each of these bootstrap samples, the result would be 1,000 bootstrap estimates of the mediated effect. If these 1,000 bootstrap estimates of the mediated effect were then sorted from smallest to largest, they would create an empirical distribution of the mediated effect (i.e., one based solely on the data) that would closely approximate the actual non-normal distribution of the mediated effect for those specific values of a and b. A 95% confidence interval can then be calculated by finding the bootstrap estimates of the mediated effect that correspond to the 2.5 and 97.5 percentiles of the empirical distribution. The bootstrap confidence interval could then be used to test for statistical significance as with any other confidence interval; this is called the Percentile Bootstrap Test. Similar to the Distribution of the Product Test, the Percentile Bootstrap Test will often produce asymmetric confidence intervals. This makes sense, because conceptually these two approaches are doing exactly the same thing. The only difference is that the Distribution of the Product Test is using the actual mathematical distribution to create the confidence intervals, while the Percentile Bootstrap Test is using an empirical distribution based on the data. The Distribution of the Product Test might seem like it is better because it uses the actual distribution, but in more complex mediation models, the computational requirements to find the actual distribution become too difficult to use in practice, while the Percentile Bootstrap Test often works quite well with complex models.
There are several variations on the Percentile Bootstrap Test that are worth mentioning. The Bias-Corrected Bootstrap Test includes a correction for potential bias due to skew in the empirical distribution caused by resampling the original data, and the Accelerated Bias-Corrected Bootstrap Test includes corrections for both skew and kurtosis. In general, the two bias-corrected versions perform similarly to each other in terms of statistical power and Type I error rate, but recent studies (e.g., Fritz, Taylor, & MacKinnon, 2012) have shown that the Bias-Corrected Bootstrap Test has Type I error rates that are too high in many cases, leading researchers to recommend the Percentile Bootstrap Test instead.
The third computationally intensive approach to mediation analysis discussed here is Bayesian statistics. Bayesian approaches to mediation analysis are different from the null hypothesis significance testing most psychologists are trained in, which falls under what is collectively known as frequentist statistics, because Bayesian statistics do not rely on the idea of sampling error or a p-value. Instead, Bayesian methods for mediation analysis allow the researcher to provide some information about the mediated effect in the population before estimating the effect in the sample using what is known as an informative prior distribution. If the research is being conducted in an area where little prior information is available, uninformative priors may be used that allow the researcher to maintain the advantages of Bayesian analysis without having to supply an informed prior. While the idea of providing information about the mediated effect before conducting any analyses may seem backwards, Kaplan (2014) notes that the theories and hypotheses that are supposed to be driving the data collection should provide a considerable amount of information about the expected size and direction of the mediated effect. The information provided in the informative prior is then combined with the observed data and new, better estimates of the mediated effect are created in what is known as the posterior distribution. Unlike in frequentist statistics, no significance test is performed. Instead, the value of the mediated effect in the posterior distribution must be evaluated based solely on the size of the effect and the variability associated with the effect. It should be noted that methodologists have been recommending this practice of focusing on effect size and variability for years, even with frequentist statistics (e.g., Wilkinson & APA Task Force on Statistical Inference, 1999), though many psychologists continue to prefer p-values despite the many flaws inherent in them.
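The prior-to-posterior logic can be illustrated with a simple normal-normal conjugate update for a single path coefficient; the prior values and sample estimate below are invented for illustration, and a real Bayesian mediation analysis would typically use MCMC software rather than a closed-form update:

```python
def normal_posterior(prior_mean, prior_sd, est, se):
    """Combine a normal prior with a normal likelihood (conjugate update).

    The posterior mean is a precision-weighted average of the prior mean
    and the sample estimate, and the posterior is narrower than either
    source of information alone.
    """
    w_prior = 1 / prior_sd**2   # precision of the prior
    w_data = 1 / se**2          # precision of the data
    post_var = 1 / (w_prior + w_data)
    post_mean = post_var * (w_prior * prior_mean + w_data * est)
    return post_mean, post_var**0.5

# Prior belief about the a path (say, from earlier studies) vs. new data.
post_mean, post_sd = normal_posterior(prior_mean=0.3, prior_sd=0.2,
                                      est=0.5, se=0.1)
print(round(post_mean, 3), round(post_sd, 3))  # 0.46 0.089
```

Because the data are more precise than the prior here, the posterior mean sits closer to the sample estimate than to the prior mean, and the posterior standard deviation is smaller than both input uncertainties.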
Bayesian statistics offer several other advantages for mediation analysis. First, unlike a 95% confidence interval created using frequentist methods, a 95% credible interval created using Bayesian statistics can actually be interpreted as there being a 95% probability that the true value of the mediated effect is within that interval. Second, the more complex the mediation model, the larger the number of parameters that have to be estimated, which usually results in the need for a larger sample size. Bayesian estimation is a tool that may alleviate some of the sample size burden; this is achieved through the incorporation of pre-existing substantive knowledge into the prior distribution as well as the use of exact inference. Exact inference simply means that inferences do not rely on large-sample theory, which makes them better suited to small sample sizes (Yuan & MacKinnon, 2009). Examples from the methodological literature demonstrating the advantages of Bayesian estimation with mediation models are seen in Yuan and MacKinnon (2009) for single-level and multilevel regression and in Hox, van de Schoot, and Matthijsse (2012) for multilevel structural equation models (SEM).
More Complex Designs
Categorical Mediators and Distal Outcomes
Categorical mediator variables and outcomes expand the typical regression models by introducing data that are not normally distributed. For example, Wyszynski, Bricker, and Comstock (2011) found the effect of a parent’s smoking cessation (antecedent) on a child’s subsequent smoking behavior as a senior in high school (categorical outcome) was mediated by negative attitudes toward smoking and tobacco refusal self-efficacy (continuous mediator variables). Any combination of continuous/categorical mediator and outcome variables is possible, however—categorical mediator with continuous outcome, continuous mediator with categorical outcome, and both categorical. If viewed through the lens of statistical analysis, the domino example contains a dichotomous mediator as well as a dichotomous outcome because the dominoes may take one of two values (i.e., remain standing or fall over).
The rationale behind analyzing these types of mediating variables is the same as when continuous mediators are analyzed, but there are some complications. First, a different type of regression must be used when the mediator or the outcome is dichotomous, generally either probit or logistic regression. Second, because the error variances in probit and logistic regression are not estimated but fixed, c − c′ does not always equal ab, because c and c′ do not necessarily have the same scale (MacKinnon & Dwyer, 1993; MacKinnon, Lockwood, Brown, Wang, & Hoffman, 2007). Categorical data in this section are limited to either dichotomous mediators or outcomes. Although not discussed here, count data are another type of data that are often encountered in practice and may be handled in mediation analysis.
To this point in the discussion, it has been assumed that the data from each participant are independent. Many times in psychology, however, this is not the case, with participants being nested within some structure, such as children nested within classrooms or patients nested within clinics. In these cases, a patient from a given clinic is likely to have more in common with another patient from the same clinic than with a patient from a different clinic. Ignoring this non-independence usually results in inflated Type I error rates (Bovaird, 2007), so a different family of models, which go by several names including multilevel models and hierarchical linear models, is needed to correct for this dependence.
In addition to controlling for clustering, multilevel mediation models allow for some substantively interesting research questions. Specifically, any predictor that is measured for each person (e.g., a treatment administered to a patient) can have an effect that varies depending upon the cluster (e.g., the clinic) in which the treatment was delivered. The term multilevel is used because there is variability between individuals (Level 1) and variability between clusters (Level 2) in the same statistical model. Several frameworks exist to study multilevel mediation, but the approach described by Bauer, Preacher, and Gil (2006) is discussed here. In multilevel mediation, the levels of the variables are typically denoted as antecedent level–mediator level–outcome level (Krull & MacKinnon, 2001). Therefore, a 1-1-1 multilevel mediation model would indicate that the antecedent, mediator, and outcome were all person-level variables, not cluster-level variables, while a 2-1-1 multilevel mediation model would mean that the antecedent is measured at the cluster level and predicts the person-level mediator and outcome. For example, Krull and MacKinnon used a 2-1-1 multilevel mediation model to investigate whether randomly assigning high school football teams to receive an intervention to reduce steroid use or to a control group (cluster-level antecedent) reduced students’ perceived tolerance of steroid use by peers and coaches (person-level mediator), which in turn reduced students’ intentions to use steroids (person-level outcome variable).
Conditional Process Analysis
Although not immediately obvious, in a 2-1-1 multilevel mediation model, both the antecedent and the mediator can have effects that differ between clusters. In the steroid example, it is possible that the effect of perceived tolerance is not the same for all of the football teams (i.e., moderation). Combining mediation and moderation into a single analysis, whether multilevel or single-level, is called a conditional process model (Hayes, 2013) where process refers to the process of mediation and conditional reflects the differential effect of moderation. The term conditional process model was coined to replace the often-used terms moderated mediation or mediated moderation (Muller, Judd, & Yzerbyt, 2005) because of the confusion that surrounds these terms given that researchers have used both terms to describe identical models and both fall within the conditional process model framework. In a conditional process model there is at least one mediator variable and at least one of the paths that make up the mediated effect ab is moderated by some fourth variable. Consider the evolution of Fishbein and Ajzen’s (1975) Theory of Reasoned Action (TRA) into the Theory of Planned Behavior (TPB; Ajzen, 1991). TRA is a mediation model where the effects of attitudes and subjective norms (both antecedents) on behavior (outcome) are mediated by intentions. Subsequently TRA was expanded to include perceived behavioral control, creating the TPB such that perceived behavioral control is hypothesized to moderate the relationship between intentions and behavior; specifically, intentions are expected to predict behavior better when an individual has higher behavioral control.
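When the b path is moderated, the indirect effect becomes a function of the moderator: with b(w) = b0 + b1·w, the indirect effect of X on Y through M at moderator value w is a·(b0 + b1·w). A minimal sketch with hypothetical coefficients (none of these values come from the TPB literature):

```python
# Conditional process sketch: the b path is moderated, so the
# indirect effect at moderator value w is a * (b0 + b1 * w).
# All coefficients are hypothetical.
a = 0.5
b0, b1 = 0.2, 0.3

# Conditional indirect effects at low, mean, and high values of the
# moderator (e.g., perceived behavioral control at -1 SD, mean, +1 SD).
for w in (-1, 0, 1):
    print(w, round(a * (b0 + b1 * w), 3))  # -0.05, 0.1, 0.25
```

Probing the indirect effect at several moderator values in this way is how conditional process analyses typically report results, rather than as a single mediated-effect estimate.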
As described previously, mediation makes an assumption of temporal precedence that requires longitudinal data. When each variable is repeatedly measured across time, an individual’s score at the first measurement occasion is likely to be correlated to some degree with his or her score at one or more subsequent measurement occasions, and the non-independence of repeated observations must be accounted for by using statistical models specifically designed for longitudinal data. The first longitudinal model that is discussed is a variation of the regression models that have been discussed previously. If every variable in a single-mediator model is measured at least twice, then the difference between each pair of consecutive scores, called change scores or difference scores, can be computed. If these change scores are then substituted into the regression equations that are provided in Figure 2, the result is a change score mediation model. This model works well for situations where the level of a variable is expected to change between measurements, but it has several limitations, including that the difference score can take into account only two repeated measurements at a time and that the observed difference score is often (Cronbach & Furby, 1970), but not always (MacKinnon, 2008), unreliable.
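The change-score logic can be sketched with simulated data: compute the wave-to-wave differences and estimate the a and b paths from them. The effect sizes are assumed, and for brevity the b path is a simple-regression slope (harmless here because the simulated direct effect on the outcome change is zero; the full model would regress the outcome change on both the mediator change and the antecedent change):

```python
import random

random.seed(2)
n = 2000

# Two waves of X; the change in X drives the change in M, which
# drives the change in Y (effect sizes 0.5 and 0.4 are assumed).
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [xi + random.gauss(0.5, 1) for xi in x1]
dx = [later - earlier for earlier, later in zip(x1, x2)]
dm = [0.5 * d + random.gauss(0, 1) for d in dx]
dy = [0.4 * m + random.gauss(0, 1) for m in dm]

def slope(xs, ys):
    # Simple-regression slope: cov(x, y) / var(x).
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((u - mx) * (v - my) for u, v in zip(xs, ys))
    sxx = sum((u - mx) ** 2 for u in xs)
    return sxy / sxx

a_path = slope(dx, dm)            # change in X -> change in M
b_path = slope(dm, dy)            # change in M -> change in Y
print(round(a_path * b_path, 2))  # change-score mediated effect
```

With these assumed effect sizes the estimated product should land near the true value of 0.5 × 0.4 = 0.2.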
The second limitation can be addressed through the use of latent change score models (LCS; Ferrer & McArdle, 2003), which represent the change between two occasions as a latent variable that is free from measurement error. The mediation version of the LCS, described by MacKinnon (2008), includes a set of LCS models: one for the antecedent, one for the consequent, and one for the mediator. There are several ways in which the mediated effect can be set up with LCS models, including having the observed values of each variable predicting the latent change scores for the subsequent variables in the causal chain, having the latent change scores of each variable predicting the latent change scores for the subsequent variables, or even including a latent variable that measures the change in the latent change scores (i.e., second-order change).
An alternative to change scores is the use of autoregressive effects, which occur when a variable’s value at one point in time is predicted from one or more previous measurements of the same variable. Though the change score model allows prediction of the change in a variable’s value between two successive time points, autoregressive models provide additional flexibility because the number of lags, that is, the number of previous measurements included in the model, can be varied. For example, a first-order autoregressive model, AR(1), regresses each measurement on only the previous measurement, while a second-order model, AR(2), regresses each measurement on the previous two measurements (Selig & Preacher, 2009). In the context of mediation, autoregressive models are extremely useful because they can specify not just the autoregressive lags, but also the lags of the mediation effects a, b, and c′ (Cole & Maxwell, 2003), allowing for the investigation of many possible time-specific indirect effects. For example, in the autoregressive mediation model in Figure 4 there are five specific indirect effects between the first measurement of the antecedent and the fourth measurement of the outcome. The overall mediated effect of the first measurement of the antecedent on the fourth measurement of the outcome can also be examined.
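Each time-specific indirect effect is the product of the coefficients along one path through the autoregressive system. A sketch of the tracing arithmetic for three illustrative paths from the first X to the fourth Y (the coefficients are hypothetical, and this is a subset of the possible paths, not the full set from any particular figure):

```python
# Tracing time-specific indirect effects in an autoregressive
# mediation model. Each indirect effect is the product of the
# coefficients along one path; all coefficients are hypothetical.
s_x, s_m, s_y = 0.70, 0.60, 0.65   # first-order autoregressive paths
a, b = 0.30, 0.25                  # lagged X -> M and M -> Y paths

paths = {
    "X1->M2->M3->Y4": a * s_m * b,   # mediator carries the effect forward
    "X1->M2->Y3->Y4": a * b * s_y,   # outcome carries the effect forward
    "X1->X2->M3->Y4": s_x * a * b,   # antecedent carries the effect forward
}
total = sum(paths.values())          # sum over the traced indirect paths
for name, effect in paths.items():
    print(name, round(effect, 4))
print("total", round(total, 4))
```

The overall mediated effect across waves is obtained by summing all such products, which is why the number of specific indirect effects grows quickly with the number of measurement occasions.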
The change score model examines changes between pairs of successive measurements, while the autoregressive model predicts the value of a variable at one measurement from the same and other variables measured at previous points in time. But what if the goal is to describe the change in the values of a variable across all of the repeated measurements, called a trajectory? Growth curve models do just that using either a multilevel or a latent variable framework. Growth curve models not only allow the shape of the trajectory to take many forms (e.g., linear, quadratic, exponential decay; Fritz, 2014), but also allow the initial value and rate of growth to vary between individuals. Cheong, MacKinnon, and Khoo (2003) describe a latent growth curve mediation model where a dichotomous independent variable changes the trajectory of the mediator across time, which in turn changes the trajectory of the outcome variable.
Because growth curves must be fit independently to each of the continuous variables in the mediation model, the timing of the repeated measurements for each variable must be considered. If all variables are measured across the same time span, then the growth processes are occurring simultaneously, resulting in a parallel process mediation model. If instead the measurements of the variables are offset to reflect the causal ordering of the variables, then this is known as a sequential process mediation model. In parallel process mediation models, it is important to consider that some indirect effects may not be logical. For example, the change in the mediator variable should not be hypothesized to mediate the effect of treatment on the initial status of the consequent. This analysis would be using future data to predict the past when using the change in mediator to predict the starting point of the consequent (Selig & Preacher, 2009).
Causal mediation analysis is presented in a separate section not because it is new, but because it comes from a seemingly very different perspective than the other models discussed here. Though the discussion of these methods can be complex and intimidating, as with Bayesian methods, the causal methods provide many benefits over traditional methods for mediation analysis (e.g., better estimates of mediated effects, stronger evidence for causality, and better performance with nonlinear models such as logistic regression), so they warrant a brief overview. Causal mediation analyses come from a potential outcomes perspective (Rubin, 1974). The individual causal effect of a treatment, which may be thought of as an antecedent with two levels (i.e., X = 0 for the control and X = 1 for the treatment condition), is defined as the difference between the potential score on an outcome variable Y if participant i received the treatment, Yi(1), and the potential score on the outcome if participant i did not receive the treatment, Yi(0). That is, the individual causal effect of the treatment X for individual i is equal to Yi(1) − Yi(0).
The problem with the individual causal effect, however, is that both of these potential outcome scores cannot be observed because participant i cannot simultaneously be in the treatment and control conditions. Suppose participant i was assigned to the treatment condition: then Yi(1) could be observed, which is called the factual. Because participant i cannot also be assigned to the control condition, Yi(0) cannot be observed, which is called the counterfactual (Splawa-Neyman, 1990). Thus, the individual causal effect can never be directly calculated, which is the Fundamental Problem of Causal Inference (Holland, 1986). Instead, average causal effects are used, which are based on the assumption that randomly assigning participants to the treatment conditions creates equivalent groups of individuals. The average causal effect of the treatment X is the difference in the mean outcome scores for the participants who receive the different treatments (i.e., E[Y(1)] − E[Y(0)], where E denotes the expected population mean) and, under random assignment, provides an unbiased estimate of the average of the individual causal effects. While the average causal effect can provide quite a bit of information about the causal relation between the antecedent and the mediator, it does not provide much information about the causal relation between the mediator and the outcome because often it is impossible, either physically or ethically, to randomly assign individuals to levels of the mediator. Causal mediation methods attempt to address this shortcoming of the traditional mediation methods.
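The logic of the average causal effect can be sketched by simulating the full table of potential outcomes, which a real study never observes, and then letting random assignment reveal exactly one potential outcome per person. The true individual effect of 0.5 is an assumed value:

```python
import random

random.seed(3)
n = 10000

# Simulate the full set of potential outcomes, which is never
# observable in practice (true individual effect is 0.5, assumed).
y0 = [random.gauss(0, 1) for _ in range(n)]   # Y_i(0)
y1 = [v + 0.5 for v in y0]                    # Y_i(1)

# Random assignment reveals exactly one potential outcome per person.
treated = [random.random() < 0.5 for _ in range(n)]
mean_t = sum(y for y, t in zip(y1, treated) if t) / sum(treated)
mean_c = sum(y for y, t in zip(y0, treated) if not t) / (n - sum(treated))

# Average causal effect estimate: E[Y | X = 1] - E[Y | X = 0].
ace = mean_t - mean_c
print(round(ace, 2))
```

Because the groups are randomly formed, the difference in observed group means recovers the average of the unobservable individual effects, which is exactly the substitution the Fundamental Problem of Causal Inference forces.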
Unlike traditional methods, causal mediation analysis distinguishes a variety of effects, both direct and indirect (MacKinnon, Valente, & Wurpts, 2015; Muthén & Asparouhov, 2015; Pearl, 2009). The controlled direct effect (CDE) is the effect of the treatment on the outcome when the mediator is held constant at some value m and is equivalent to the direct effect c′ from traditional mediation analysis when the data are continuous and normally distributed. The CDE is solely a direct effect because the mediator is held constant. There are also the pure natural direct effect (PNDE) and the total natural direct effect (TNDE). The PNDE is the effect the treatment has on the outcome when the mediator is allowed to vary as it would in the control condition; this may be thought of as the effect that the treatment would have if the mediator were maintained at the level it would have taken in the absence of the treatment. The TNDE is the effect of the treatment on the outcome when the mediator is allowed to vary as it would in the treatment condition. There are also multiple indirect effects. The total natural indirect effect (TNIE) and the pure natural indirect effect (PNIE) parallel the TNDE and PNDE: the TNIE is the indirect effect that the treatment has on the outcome through the mediator when the direct effect is allowed to operate as it would in the treatment condition, while the PNIE is conceptually similar except that the direct effect is allowed to operate as it would in the control condition. The total effect may then be computed as the sum of the PNDE and TNIE or the sum of the TNDE and the PNIE.
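In the special case of continuous, normally distributed data with linear models and no treatment-by-mediator interaction, these causal effects collapse onto the traditional ones, which makes the two decompositions easy to verify by hand. A sketch with assumed coefficients:

```python
# With linear models M = a*X and Y = c_prime*X + b*M and no X-by-M
# interaction, the natural effects reduce to the traditional paths.
# All coefficients are assumed for illustration.
a, b, c_prime = 0.4, 0.3, 0.2

pnde = tnde = c_prime      # both natural direct effects equal c'
pnie = tnie = a * b        # both natural indirect effects equal ab

total_1 = pnde + tnie      # one decomposition of the total effect
total_2 = tnde + pnie      # the other decomposition
print(round(total_1, 2), round(total_2, 2))  # both 0.32
```

When a treatment-by-mediator interaction or a nonlinear link is present, the pure and total versions of each effect diverge, which is precisely when the causal framework adds information beyond c′ and ab.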
Challenges in Mediation Analysis
By definition, mediator variables are part of a causal chain. From their undergraduate methods and statistics courses onward, psychologists have learned that correlation does not equal causation and that causal effects can only be measured with randomized experiments. Though the truth of the situation is much more nuanced than these maxims allow, this does present a serious challenge to psychologists who test mediation hypotheses. For example, in a cross-sectional, strictly observational study, statistically significant tests of mediation are likely to provide only very weak evidence of mediation occurring. Randomly assigning individuals to levels of the antecedent provides stronger evidence because it should provide a good estimate of the causal effect of the antecedent on the mediator, but unless individuals can also be randomly assigned to levels of the mediator variable, the relation between the mediator and the outcome is still strictly correlational. Using longitudinal data may provide still stronger, but again not conclusive, evidence. Then what can be done? The key to making a strong claim for mediation lies in testing the assumptions of the mediation model and creating a preponderance of evidence rather than relying on a single statistical test (MacKinnon, 2008).
One of the basic assumptions of all research in psychology is that valid, reliable measures of the variables are being used, but few variables in psychology are completely reliable. Unreliable measures of mediator variables almost always result in biased estimates of the mediated effect in single-mediator models, with the bias in many of these situations resulting in the estimates being too small (Fritz, Kenny, & MacKinnon, 2016). Moving to more complex models with multiple mediator variables results in even more concern about the estimates, not because the bias is necessarily worse, but because the combined effect of unreliability in each of the individual variables in the model on the estimate of the mediated effect is difficult to predict. That is, the estimate of the mediated effect could be too big, too small, opposite in sign, or even unbiased depending on the pattern of relations between variables, the levels of unreliability, and the specific model. Latent variable models can correct for much of the measurement error in mediation models, but a researcher’s evidence for mediation may be limited by the reliability of their measures. Hence, psychologists interested in finding evidence for mediation need to stop relying on measures whose reliability is just good enough.
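The attenuation caused by an unreliable mediator can be sketched with simulated data: the same model is estimated twice, once with the error-free mediator and once after adding measurement error that brings its reliability down to roughly .70. All effect sizes are assumed, and the simulated direct effect is zero so simple-regression slopes suffice:

```python
import random

random.seed(5)
n = 20000

# True single-mediator model with a = 0.5, b = 0.4 (so ab = 0.2);
# the mediator is then observed with added measurement error,
# giving a reliability of roughly .70 (all values assumed).
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]
y = [0.4 * mi + random.gauss(0, 1) for mi in m]
m_obs = [mi + random.gauss(0, 0.73) for mi in m]

def slope(xs, ys):
    # Simple-regression slope: cov(x, y) / var(x).
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxy = sum((u - mx) * (v - my) for u, v in zip(xs, ys))
    sxx = sum((u - mx) ** 2 for u in xs)
    return sxy / sxx

ab_true = slope(x, m) * slope(m, y)
ab_attenuated = slope(x, m_obs) * slope(m_obs, y)
print(round(ab_true, 2), round(ab_attenuated, 2))
```

The a path survives because error in an outcome does not bias a regression slope; the b path shrinks because error in a predictor does, which is why the product underestimates the true mediated effect.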
Another unavoidable issue concerns the use of longitudinal data. While cross-sectional data do not meet the assumption of temporal precedence, the use of longitudinal data does not guarantee temporal precedence is met either (Mitchell & Maxwell, 2013). Remember that the timing and spacing of the measurements must be accurate in order to allow the variables to have effects on one another and for those effects to be captured at their peak (Fritz & MacKinnon, 2012). Decisions regarding when and how often to measure the variables in a mediation model must be made largely based on theory. For example, what is the expected pattern of change over time? If the theory includes personality traits that do not change over time or the relation between the antecedent and the mediator is expected to be linear, then many time points may not be necessary. A theory about personality states or nonlinear relationships, such as exponential decay, likely requires many time points spaced in a manner that captures the change over time accurately. Additionally, an incorrect measurement interval or only having a few measurements may lead to incorrect decisions regarding the pattern of change over time or the presence of an effect at all. An optimal temporal design therefore requires researchers to make more specific hypotheses regarding how individual variables change over time before making hypotheses about longitudinal relations between variables, especially ones that involve mediator variables. If a researcher does not have enough information to include the temporal design in the mediation hypothesis, then more work is needed prior to conducting the study.
Even if perfectly reliable measures and longitudinal data that are measured at exactly the correct intervals are used, there are still some assumptions of mediation models that are simply not going to be met in a majority of psychology studies. Consider the assumption discussed previously that no variables, such as confounders or additional mediators, are omitted from the model, known as sequential ignorability. The omitted variable assumption could be satisfied by simply measuring every variable related to the antecedent, mediator, and outcome variable, and then including them in the model. But it is practically impossible to measure and include every relevant variable, even if they could be identified! This is especially true when one considers that the mediation model being tested is almost guaranteed to be just one small piece of a much larger process. Left with no reasonable way to avoid violating this assumption, the effect that violating this assumption has on the mediated effect must be estimated, which is known as sensitivity analysis (Cox, Kisbu-Sakarya, Miočević, & MacKinnon, 2014).
While numerous authors have presented methods for conducting a sensitivity analysis (e.g., Imai, Keele, & Tingley, 2010), Liu, Kuramoto, and Stuart (2013) state that all sensitivity analyses come from one of two traditions. Sensitivity analyses from the classic statistical tradition attempt to determine how large the effect of an omitted variable would need to be for the effect of the mediator variable to be exactly zero, or at least no longer statistically significant. Those from the epidemiological tradition, however, attempt to determine how much of the mediated effect can be explained by other related variables. A sensitivity analysis that finds that omitting even a very weakly related variable would make the effect of the mediator variable nonsignificant suggests that the results provide poor evidence for mediation. In contrast, a sensitivity analysis that finds the effect of the mediator variable persists unless an improbably, or even impossibly, important variable has been omitted provides strong evidence for mediation. Therefore, a mediation analysis without a sensitivity analysis is incomplete, and the reporting of sensitivity analyses along with statistical tests of mediation should be standard in psychology.
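In the epidemiological spirit, a small simulation can show how an omitted confounder alone can manufacture an apparent mediated effect. Here the true b path is zero, yet a naive analysis that ignores the confounder U recovers a nonzero ab. The coefficients are assumed, and for brevity the b path is a simple slope that ignores X (harmless here because Y does not depend on X directly):

```python
import random

random.seed(4)
n = 20000

def naive_ab(conf):
    # Simulate a model where the true b path is zero and an omitted
    # confounder U (strength `conf`) drives both M and Y; then
    # estimate ab while ignoring U, as a naive analysis would.
    x = [random.gauss(0, 1) for _ in range(n)]
    u = [random.gauss(0, 1) for _ in range(n)]
    m = [0.5 * xi + conf * ui + random.gauss(0, 1) for xi, ui in zip(x, u)]
    y = [conf * ui + random.gauss(0, 1) for ui in u]
    mm, my = sum(m) / n, sum(y) / n
    b = (sum((mi - mm) * (yi - my) for mi, yi in zip(m, y))
         / sum((mi - mm) ** 2 for mi in m))
    return 0.5 * b   # a is fixed at its true value for simplicity

print(round(naive_ab(0.0), 3), round(naive_ab(0.6), 3))
```

A sensitivity analysis runs this logic in reverse: it varies the confounder strength and reports the value at which the observed mediated effect would be fully explained away.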
Multiple Studies and Multiple Designs
Finally, just as mediation is a process, establishing that a variable is a mediator is a process. While exactly replicating a mediation study may seem like a waste of time to many busy psychologists, given the recent concerns regarding the nonreplicability of published studies in psychology (e.g., Open Science Collaboration, 2015), exact replications are no sure thing and can provide much needed evidence for a variable being a mediator. Conceptual replications that vary specific contextual factors or measures of variables are likely to provide a wealth of information as well, particularly for conditional process models. But moving beyond replications, multiple studies that examine the different pieces of the mediation model are more likely to provide the necessary strong evidence for a variable being a mediator. Consider a series of studies conducted on the same variables. In the first study, a preliminary cross-sectional observational study is conducted to determine whether the proposed antecedent, mediator, and outcome variables are related. In the second study, the researcher measures these same variables multiple times to determine whether there is a longitudinal relation between the variables. In a third study, the researcher manipulates the antecedent variable, randomly assigning individuals to levels of the antecedent to determine whether changing the antecedent changes the mediator and outcome. In a fourth study, participants are randomly assigned to levels of the mediator to determine whether changing the mediator changes the outcome variable. Taken together, the findings from these four studies provide much stronger evidence of mediation than any of these studies on their own.
Mediator variables are one of the most frequently hypothesized variable types in psychology because of their scientific value in exploring and explaining relations between variables. Although a plethora of research on mediator variables has been conducted since Baron and Kenny (1986), mediation research is by no means a stagnant field without methodological innovation. New methods for assessing mediating variables are being developed in many fields (e.g., psychology, epidemiology, statistics, education, and computer science). The challenge for psychologists today is staying abreast of these developments, which are published in a wide array of methodological and applied statistical journals that may be difficult for psychologists to access. Luckily, the Internet has improved access to new developments as well as to a host of Web-based tools for conducting and interpreting mediation analyses.
This research was supported in part by a grant from the National Institute on Drug Abuse (DA 009757).
This list includes books and articles that will provide a more thorough introduction to past and current trends in assessing mediator variables. Five books are: Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach (Hayes, 2013), Causal Inference (Hernán & Robins, 2010), Causal Inference in Statistics: A Primer (Pearl, Glymour, & Jewell, 2016), Introduction to Statistical Mediation Analysis (MacKinnon, 2008), and Explanation in Causal Inference: Methods for Mediation and Moderation (VanderWeele, 2015). Five articles are: James and Brett (1984), Baron and Kenny (1986), MacKinnon, Lockwood, Hoffman, West, and Sheets (2002), Holland (1986), and Muthén and Asparouhov (2015).
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179–211.Find this resource:
Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic and statistical considerations. Journal of Personality and Social Psychology, 51, 1173–1182.Find this resource:
Bauer, D. J., Preacher, K. J., & Gil, K. M. (2006). Conceptualizing and testing random indirect effects and moderated mediation in multilevel models: New procedures and recommendations. Psychological Methods, 11, 142–163.Find this resource:
Bollen, K. A. (1987). Total, direct, and indirect effects in structural equation models. Sociological Methodology, 17, 37–69.Find this resource:
Bollen, K. A., & Stine, R. (1990). Direct and indirect effects: Classical and bootstrap estimates of variability. Sociological Methodology, 20, 115–140.Find this resource:
Bovaird, J. A. (2007). Multilevel structural equation models for contextual factors. In T. D. Little, J. A. Bovaird, & N. A. Wainer (Eds.), Modeling contextual effects in longitudinal studies (pp. 151–182). Mahwah, NJ: Lawrence Erlbaum Associates.Find this resource:
Cheong, J., MacKinnon, D. P., & Khoo, S. T. (2003). Investigation of mediational processes using parallel process latent growth curve modeling. Structural Equation Modeling, 10, 238–262.Find this resource:
Cole, D. A., & Maxwell, S. E. (2003). Testing mediational models with longitudinal data: Questions and tips in the use of structural equation modeling. Journal of Abnormal Psychology, 112, 558–577.Find this resource:
Cole, D. A., & Preacher, K. J. (2014). Manifest variable path analysis: Potentially serious and misleading consequences due to uncorrected measurement error. Psychological Methods, 19, 300–315.Find this resource:
Collins, L. M., & Graham, J. W. (2002). The effect of the timing and spacing of observations in longitudinal studies of tobacco and other drug use: Temporal design considerations. Drug and Alcohol Dependence, 68, 85–96.Find this resource:
Cox, M. G., Kisbu-Sakarya, Y., Miočević, M., & MacKinnon, D. P. (2014). Sensitivity plots for confounder bias in the single mediator model. Evaluation Review, 37, 405–431.Find this resource:
Cronbach, L. J., & Furby, L. (1970). How we should measure “change”: Or should we? Psychological Bulletin, 74, 68.Find this resource:
Cuijpers, P., Weitz, E., Twisk, J., Kuehner, C., Cristea, I., David, D., et al. (2014). Gender as predictor and moderator of outcome in cognitive behavior therapy and pharmacotherapy for adult depression: An “individual patient data” meta-analysis. Depression and Anxiety, 31, 941–951.Find this resource:
Ferrer, E., & McArdle, J. J. (2003). Alternative structural models for multivariate longitudinal data analysis. Structural Equation Modeling, 10, 493–524.Find this resource:
Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior: An introduction to theory and research. Reading, MA: Addison-Wesley.Find this resource:
Fritz, M. S. (2014). An exponential decay model for mediation. Prevention Science, 15, 611–622.Find this resource:
Fritz, M. S., Kenny, D. A., & MacKinnon, D. P. (2016). The combined effects of measurement error and omitting confounders in the single-mediator model. Multivariate Behavioral Research, 51, 681–697.Find this resource:
Fritz, M. S., & MacKinnon, D. P. (2012). Mediation models for developmental data. In B. Laursen, T. Little, & N. Card (Eds.), Handbook of developmental research methods (pp. 291–310). New York: Guilford Press.Find this resource:
Fritz, M. S., Taylor, A. B., & MacKinnon, D. P. (2012). Explanation of two anomalous results in statistical mediation analysis. Multivariate Behavioral Research, 47, 61–87.Find this resource:
Hayes, A. F. (2013). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach. New York: Guilford Press.Find this resource:
Hernán, M. A., & Robins, J. M. (2010). Causal inference. Boca Raton, FL: CRC.Find this resource:
Holland, P. W. (1986). Statistics and causal inference. Journal of the American Statistical Association, 81, 945–960.Find this resource:
Hox, J., van de Schoot, R., & Matthijsse, S. (2012). How few countries will do? Comparative survey analysis from a Bayesian perspective. Survey Research Methods, 6, 87–93.Find this resource:
Hoyle, R. H., & Kenny, D. A. (1999). Sample size, reliability, and tests of statistical mediation. In R. H. Hoyle (Ed.), Statistical strategies for small sample research (pp. 196–222). Thousand Oaks, CA: SAGE.Find this resource:
Imai, K., Keele, L., & Tingley, D. (2010). A general approach to causal mediation analysis. Psychological Methods, 15, 309–334.Find this resource:
James, L. R., & Brett, J. M. (1984). Mediators, moderators and tests for mediation. Journal of Applied Psychology, 69, 307–321.Find this resource:
Judd, C. M., & Kenny, D. A. (1981). Process analysis: Estimating mediation in treatment evaluations. Evaluation Review, 5, 602–619.Find this resource:
Kaplan, D. (2014). Bayesian statistics for the social sciences. New York: Guilford Press.Find this resource:
Kaufman, N. K., Rohde, P., Seeley, J. R., Clarke, G. N., & Stice, E. (2005). Potential mediators of cognitive-behavioral therapy for adolescents with comorbid major depression and conduct disorder. Journal of Consulting and Clinical Psychology, 73, 38–46.Find this resource:
Krull, J. L., & MacKinnon, D. P. (2001). Multilevel modeling of individual and group level mediated effects. Multivariate Behavioral Research, 36, 249–277.
Le Roy, M. (2009). Research methods in political science: An introduction using MicroCase® (7th ed.). Boston: Cengage Learning.
Li, Y., Schneider, J. A., & Bennett, D. A. (2007). Estimation of the mediation effect with a binary mediator. Statistics in Medicine, 26, 3398–3414.
Liu, W., Kuramoto, S. J., & Stuart, E. A. (2013). An introduction to sensitivity analysis for unobserved confounding in non-experimental prevention research. Prevention Science, 14, 570–580.
Lockwood, C. M., & MacKinnon, D. P. (1998). Bootstrapping the standard error of the mediated effect. Proceedings of the Twenty-Third Annual SAS Users Group International Conference (pp. 997–1002). Cary, NC: SAS Institute.
MacKinnon, D. P. (2008). Introduction to statistical mediation analysis. New York: Erlbaum.
MacKinnon, D. P., & Dwyer, J. H. (1993). Estimating mediated effects in prevention studies. Evaluation Review, 17, 144–158.
MacKinnon, D. P., Fritz, M. S., Williams, J., & Lockwood, C. M. (2007). Distribution of the product confidence limits for the indirect effect: Program PRODCLIN. Behavior Research Methods, 39, 384–389.
MacKinnon, D. P., Krull, J. L., & Lockwood, C. M. (2000). Equivalence of the mediation, confounding and suppression effect. Prevention Science, 1, 173–181.
MacKinnon, D. P., Lockwood, C. M., Brown, C. H., Wang, W., & Hoffman, J. M. (2007). The intermediate endpoint effect in logistic and probit regression. Clinical Trials, 4, 499–513.
MacKinnon, D. P., Lockwood, C. M., Hoffman, J., West, S., & Sheets, V. (2002). A comparison of methods to test mediated and other intervening variable effects. Psychological Methods, 7, 83–104.
MacKinnon, D. P., Lockwood, C. M., & Williams, J. (2004). Confidence limits for the indirect effect: Distribution of the product and resampling methods. Multivariate Behavioral Research, 39, 99–128.
MacKinnon, D. P., Valente, M. J., & Wurpts, I. C. (2015, October). The centrality of the intervention by mediator interaction in causal mediation analysis. Paper presented at the annual meeting of the Society for Multivariate Experimental Psychology, Redondo Beach, CA.
MacKinnon, D. P., Warsi, G., & Dwyer, J. H. (1995). A simulation study of mediated effect measures. Multivariate Behavioral Research, 30, 41–62.
Maxwell, S. E., & Cole, D. A. (2007). Bias in cross-sectional analyses of longitudinal mediation. Psychological Methods, 12, 23–44.
Mitchell, M. A., & Maxwell, S. E. (2013). A comparison of the cross-sectional and sequential designs when assessing longitudinal mediation. Multivariate Behavioral Research, 48, 301–339.
Muller, D., Judd, C. M., & Yzerbyt, V. Y. (2005). When moderation is mediated and mediation is moderated. Journal of Personality and Social Psychology, 89, 852–863.
Muthén, B., & Asparouhov, T. (2015). Causal effects in mediation modeling: An introduction with applications to latent variables. Structural Equation Modeling, 22, 12–23.
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), 943.
Pearl, J. (2009). Causality (2nd ed.). Cambridge, U.K.: Cambridge University Press.
Pearl, J., Glymour, M., & Jewell, N. P. (2016). Causal inference in statistics: A primer. New York: Wiley.
Preacher, K. J., & Hayes, A. F. (2008). Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behavior Research Methods, 40, 879–891.
Ranby, K. W., Aiken, L. S., MacKinnon, D. P., Elliot, D. L., Moe, E. L., McGinnis, W., & Goldberg, L. (2009). A mediation analysis of the ATHENA intervention for female athletes: Prevention of athletic-enhancing substance use and unhealthy weight loss behaviors. Journal of Pediatric Psychology, 34, 1069–1083.
Rubin, D. B. (1974). Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of Educational Psychology, 66, 688–701.
Selig, J. P., & Preacher, K. J. (2009). Mediation models for longitudinal data in developmental research. Research in Human Development, 6, 144–164.
Shrout, P. E., & Bolger, N. (2002). Mediation in experimental and nonexperimental studies: New procedures and recommendations. Psychological Methods, 7, 422–445.
Sobel, M. E. (1982). Asymptotic confidence intervals for indirect effects in structural equation models. In S. Leinhardt (Ed.), Sociological methodology (pp. 290–312). Washington, DC: American Sociological Association.
Splawa-Neyman, J. (1990). On the application of probability theory to agricultural experiments. Essays on principles. Section 9. Statistical Science, 5, 465–472. Originally published in Polish in 1923.
Tett, R. P., & Meyer, J. P. (1993). Job satisfaction, organizational commitment, turnover intention, and turnover: Path analyses based on meta-analytic findings. Personnel Psychology, 46, 259–293.
Tofighi, D., & MacKinnon, D. P. (2011). RMediation: An R package for mediation analysis confidence intervals. Behavior Research Methods, 43, 692–700.
VanderWeele, T. J. (2015). Explanation in causal inference: Methods for mediation and moderation. Oxford: Oxford University Press.
Wilkinson, L., & APA Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594–604.
Woodworth, R. S. (1926). Dynamic psychology. In C. Murchison (Ed.), Psychologies of 1925 (pp. 111–126). Worcester, MA: Clark University Press.
Wright, S. (1921). The theory of path coefficients: A reply to Niles’s criticism. Genetics, 8, 239–255.
Wyszynski, C. M., Bricker, J. B., & Comstock, B. A. (2011). Parental smoking cessation and child daily smoking: A 9-year longitudinal study of mediation by child cognitions about smoking. Health Psychology, 30, 171–176.
Young, M. D., Lubans, D. R., Collins, C. E., Callister, R., Plotnikoff, R. C., & Morgan, P. J. (2015). Behavioral mediators of weight loss in the SHED-IT community randomized controlled trial for overweight and obese men. Annals of Behavioral Medicine, 49, 286–292.
Yuan, Y., & MacKinnon, D. P. (2009). Bayesian mediation analysis. Psychological Methods, 14, 301–322.