Training from an Organizational Psychology Perspective
Summary and Keywords
Training comprises the systematic processes initiated by the organization that facilitate relatively permanent changes in the knowledge, skills, or affect/attitudes of organizational members. Cumulative meta-analytic evidence indicates that training is effective, producing, on average, moderate effect sizes. Training is most effective when designed so that trainees are active and encouraged to self-regulate during training, and when it is well-structured and requires effort on the part of trainees. Additional characteristics of effective training are: The purpose, objectives, and intended outcomes of training are clearly communicated to trainees; the training content is meaningful, and training assignments, examples, and exercises are relevant to the job; trainees are provided with instructional aids that can help them organize, learn, and recall training content; opportunities for practice in a safe environment are provided; feedback is provided by trainers, observers, peers, or the task itself; and training enables learners to observe and interact with others. In addition, effective training requires a prior needs assessment to ensure the relevance of training content and provides conditions to optimize trainees’ motivation to learn. After training, care should be taken to provide opportunities for trainees to implement trained skills, and organizational and social support should be in place to optimize transfer. Finally, it is important that all training be evaluated to ensure learning outcomes are met and that training results in increased job performance and/or organizational effectiveness.
What Is Training?
Organizational training has been defined in a number of ways over the years. A classic definition offered by Goldstein and Ford (2002) defined training as the systematic acquisition of skills, rules, concepts, or attitudes that result in improved performance in another environment. In another popular training text, Noe (2010) defined training as a planned effort by a company to facilitate employees’ learning of job-related competencies. Note that the former definition focuses on acquisition (learning) by the trainee, whereas the latter emphasizes the deliberate effort by the organization. Further, Noe addresses competencies that are job-related, whereas Goldstein and Ford are more specific as to training content—what is acquired are “skills, rules, concepts, or attitudes” that facilitate performance improvement.
A more recent definition of training that builds on both these definitions was offered by Kraiger and Culbertson (2013), who defined training as the systematic processes initiated by the organization that result in relatively permanent changes in the knowledge, skills, or attitudes of organizational members. This perspective is more useful than prior ones because it recognizes that learning can occur without training (e.g., incidental learning) and that training can occur without learning (in the case of ineffective training). It also highlights the following important properties of training: (1) Training is deliberate or intentional on the part of the organization, and thus is distinct from self-directed learning, which is initiated by the employee either formally or informally (Noe, Clarke, & Klein, 2014); (2) the outcomes of training are multidimensional—individuals responsible for training (trainers, training managers, etc.) should be able to specify the precise knowledge, skills, affect, or attitudes that can be acquired via training and that affect individual performance (Kraiger, Ford, & Salas, 1993); (3) training is most effective when it is systematic, that is, when processes are in place to determine training requirements, prepare learners, evaluate training outcomes, and support learners back in the work environment (Salas, Tannenbaum, Kraiger, & Smith-Jentsch, 2012).
Training and Learning
While training does not guarantee learning, it is impossible to discuss training without referencing learning. What is learning? In the training literature, learning is often defined as simply what results from training: “Learning is a desired outcome of training when one acquires new knowledge or behavior through practice, study, or experience” (Salas, Tannenbaum, Kraiger, & Smith-Jentsch, 2012, p. 77). A more useful definition tells us something about the process, or how the learner transitions from “not knowing” (or lacking a skill) to “knowing” (or executing the skill). In educational psychology, Mayer (2008, p. 761) provided a definition that includes the outcome but also incorporates the transitional processes:
(learning is) a change in the learner’s knowledge that is attributable to experience … [and] depends on … cognitive processing during learning and includes (a) selecting—attending to the relevant material; (b) organizing—organizing the material into a coherent mental representation; and (c) integrating—relating the incoming material with existing knowledge from long-term memory.
If the concept of “change in knowledge” is expanded to include changes in behaviors or skills, affect or attitude, this becomes a useful working definition. It highlights both the role of experience (which can include training content) and the mental activity on the part of the learner that successfully encodes and stores the information.
Returning to the definition of training, we can slightly modify the definition offered by Kraiger and Culbertson (2013) to the following: Training comprises the systematic processes initiated by the organization that facilitate relatively permanent changes in the knowledge, skills, or affect/attitudes of organizational members. Members will learn with or without training, but successful training optimizes the acquisition, retention, and retrieval of training content. Training research then can be organized around approaches or interventions that best facilitate and support learning, as well as around ways of measuring change.
Does Training Facilitate Learning?
For many decades, training research stagnated, leading Campbell (1971) in his Annual Review chapter to conclude that the field of training was “voluminous, non-empirical, non-theoretical, poorly written, and dull” (p. 565). Yet, 30 years later, Salas and Cannon-Bowers (2001) not only pronounced a “decade of progress” in theory and research but also declared that we now had a “science of training.” Instrumental in this development were three papers published in the late 1980s and early 1990s. First, Baldwin and Ford (1988) examined why knowledge and skills learned in training were often not evident back on the job. In developing their transfer of training model, they drew attention to the distinction between learning in training and transfer on the job, and highlighted the importance of understanding training as one event in a broader organizational context. Noe’s (1986) training effectiveness model similarly adopted a systems perspective, but also included the internal world of the learner. According to Noe, while the extent to which training affects later trainee behavior and performance depends in part on training delivery, both organizational-level variables (e.g., organizational support) and within-person variables (e.g., trainee self-efficacy or motivation) are influential in driving training outcomes and worthy of research attention. In sum, both Baldwin and Ford and Noe argued that the effectiveness of training depends as much or more on broader systems variables than on training delivery.
While these papers broadened our perspective of training effectiveness, Kraiger, Ford, and Salas (1993) focused attention inward to the mental processes of the learner. For decades, training practitioners and researchers emphasized “behavioral objectives” as the standard for designing and evaluating training—what the trainee could be expected to be able to do after training. This narrow focus ignored the so-called cognitive revolution in other areas of psychology (Gardner, 1987) that recognized that learning occurs in many ways including changes in appreciation (for), beliefs (in), understanding (of), mental models (about), and so forth. While Kraiger et al. was intended as more of a call for sharper training design, it had a significant impact on how training researchers have sought to evaluate training programs (Aguinis & Kraiger, 2009; Ford, Kraiger, & Merritt, 2010; Salas et al., 2012).
Evidence That Training Works
So, does training work? Cumulative evidence says yes. Salas et al. (2012) presented summaries of eight separate meta-analyses on training effectiveness. First, in an omnibus meta-analysis of all obtainable training studies, Arthur, Bennett, Edens, and Bell (2003) reported an overall effect size (d) of .63 for learning criteria and .62 for behavioral criteria. Taylor, Russ-Eft, and Chan (2005) examined the effectiveness of behavior modeling training, and found d’s over 1.00 for knowledge criteria, .29 for attitudes, and .25 for behavioral outcomes. Keith and Frese (2008) found an average d of .44 across criteria for error management training. Finally, and not reported by Salas et al., Mattingly, Kraiger, and Huntington (2016) reported mean effect sizes across outcome types between .42 and .53 for emotional intelligence training.
Other meta-analyses reported by Salas et al. (2012) have examined the effectiveness of training in specific populations. Two meta-analyses have examined the effectiveness of training for managers. Examining managerial training, Burke and Day (1986) reported overall effect sizes of .34 to .38 for learning criteria and .49 for behavioral criteria. In a subsequent meta-analysis, Powell and Yalcin (2010) reported lower, but still positive effect sizes: .17 to .55 for learning criteria and .17 to .30 for behavioral criteria. Examining broader managerial leadership development (which includes training), Collins and Holton (2004) reported effect sizes of .35 for knowledge outcomes and .40 for behavioral criteria using rigorous pretest–posttest with control group designs, and stronger effect sizes with less rigorous designs.
Two additional meta-analyses addressed team training effectiveness. Salas, Nichols, and Driskell (2007) examined the effectiveness of three forms of team training: cross-training, team coordination and adaptation training, and guided team self-correction training. Across criterion types, the researchers reported mean effect sizes between .45 and .61. In a broader investigation of all forms of team training, Salas et al. (2008) reported mean ds of .42 for cognitive outcomes, .35 for affective outcomes, and .39 for performance outcomes.
While reported mean effect sizes differ based on the scope of the meta-analysis and the criterion of interest, all reported mean effect sizes were significant. What is the practical impact of training? In a gross exercise of combining apples and oranges, the median effect size reported across the aforementioned meta-analyses and criteria is .46, a “moderate” sized effect (Cohen, 1988). A d of .46 would mean that the average participant in a training group would be at the 68th percentile of the control group distribution. Examined yet another way using the Binomial Effect Size Display (Randolph & Edmondson, 2005), a d value of .46 would mean, in practice, that while 38.8% of a control (no-training) group would be expected to improve, 61.2% of those receiving training would do so. Thus, training on average results in an improvement of roughly 22 percentage points over no training. Note that this effect size is for all forms of training; the expected effect size would likely be larger if generated only from studies in which: (1) evaluation criteria are matched to training objectives (Kraiger & Jung, 1997); (2) principles of effective instruction are built into training (see Core Instructional Processes); and (3) the broader training system is designed to identify training needs and support learning (see Pretraining Influences).
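The arithmetic behind these practical-impact figures can be reproduced directly from the median effect size. The sketch below is illustrative only: it assumes the standard normal model for the percentile interpretation of d and the usual conversion of d to a correlation r underlying the Binomial Effect Size Display.

```python
import math
from statistics import NormalDist

d = 0.46  # median effect size across the meta-analyses cited above

# Percentile of the average trainee relative to the control-group
# distribution: Phi(d) under a standard normal model.
percentile = NormalDist().cdf(d)

# Binomial Effect Size Display (Randolph & Edmondson, 2005):
# convert d to a correlation r, then split success rates around 50%.
r = d / math.sqrt(d ** 2 + 4)
control_success = 0.5 - r / 2  # expected improvement rate without training
trained_success = 0.5 + r / 2  # expected improvement rate with training

print(round(percentile * 100, 1))       # 67.7 (the "68th percentile")
print(round(control_success * 100, 1))  # 38.8
print(round(trained_success * 100, 1))  # 61.2
```

By construction, the two BESD success rates are symmetric around 50% and sum to 100%, which is what allows a single d to be read as a difference in expected improvement rates between trained and untrained groups.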
Characteristics of Effective Training
Not all training is equally successful in achieving intended outcomes. Cumulative research has identified a number of training-related variables that characterize effective training. First, learning should be an active process on the part of the trainee. Bell and Kozlowski (2008) characterized active learning approaches in training as those that encourage learners to ask questions, explore primary and peripheral content, seek feedback, and reflect on content and progress. Active learning in instruction thus encourages ownership of material and investment in learning processes, distinguishing it from more passive learning modes such as listening to lectures or watching videos. In general, active learning can be facilitated by structuring training so that trainees have the opportunity and encouragement to question training content (e.g., would this principle apply in situations not covered in training?), spontaneously produce variations of content (e.g., what are other ways to build rapport with another person?), practice new skills, receive immediate and specific feedback (e.g., remember to put on your oxygen mask before assisting others), and reflect on new material (e.g., what were the three most important concepts you learned today?).
Closely related to the practice of active learning is self-regulation. Self-regulation in learning is the “modulation of affective, cognitive, and behavioral processes throughout a learning experience to reach a desired level of achievement” (Sitzmann & Ely, 2011, p. 421). Self-regulation is generally a learner-centric process, one resulting from self-set mastery goals and one that can lead to a number of effective learner activities such as planning, monitoring, allocating effort and attention, and evaluating and responding to progress towards learning objectives. While it is speculated that there are individual differences in the extent to which learners spontaneously engage in self-regulatory activities (Winne, 1996), there is also experimental evidence that prompting self-regulation can result in effective learner practices and greater learning in training (e.g., Berthold, Nückles, & Renkl, 2007; Sitzmann, Bell, Kraiger, & Kanar, 2009; Sitzmann & Ely, 2010). For example, Sitzmann et al. reported two studies in which simple prompts inserted periodically in training slides (e.g., are you ready now to be tested?) significantly improved training performance. In a meta-analysis by Sitzmann and Ely (2011), self-regulatory mechanisms prompting goal level, persistence, effort, and self-efficacy played a significant instrumental role in learning after accounting for prior knowledge and cognitive ability. Thus, it is important for training programs to be designed both to encourage self-regulation and to assist learners to engage in effective self-regulation (DeRouin, Fritzsche, & Salas, 2005).
One mechanism for encouraging self-regulation and increasing engagement is to provide sufficient structure. This includes providing program-level and module-level overviews of instructional objectives, helping learners understand the relevance of training content, and providing regular feedback on learning performance. It is well established that providing learners control over the instructional environment is either not productive or counterproductive (Kraiger & Jerden, 2007; Landers & Reddock, in press). In the absence of structure, learners may not engage in effective planning, or may implement ineffective learning strategies (Sitzmann & Johnson, 2012). One strategy for providing structure is to provide an advance organizer. An advance organizer is an outline or framework of training content; examples include a list of learning objectives, a course outline, or examples of how the training is used back on the job (Mayer, 1979). Advance organizers can help learners focus on what’s important in training and have been found to improve learning (Luiten, Ames, & Ackerson, 1980; Stone, 1983). In technology-distributed instruction, another option for providing structure is adaptive guidance (Bell & Kozlowski, 2002a, 2008, 2010). Adaptive guidance is a form of training that provides autonomy to learners to explore the training domain, while simultaneously providing them with diagnostic and interpretive information to assist their learning decisions. For example, the learning software may suggest that, based on their training performance, the trainee should review key material before moving on to another module.
In addition, it is important to make sure training is designed in a way that requires learners to exert “optimal effort” in training. What is optimal varies by content area and trainee, but training materials should be neither too hard nor too easy for learners. It is well established that training conditions which accelerate skill acquisition often diminish retention and transfer of learned skills (Schmidt & Bjork, 1992). Though there are a number of reasons this may be so, a primary one is that skill acquisition can be accelerated through drill and practice, as well as by narrowing the variety in stimuli that elicit the desired response. In recreational sports, this is akin to learning tennis by hitting against a machine that always places the ball in the same space and at the same speed, but then floundering against an actual opponent who hits the ball to different locations.
One of the best ways to ensure effort during learning is to build practice variability into the learning paradigm (Schmidt & Bjork, 1992). Practice variability simply means that the learning and practice conditions are allowed to vary (randomly or systematically) across learning trials. In one training demonstration of this effect, Holladay and Quiñones (2003) had undergraduate trainees learn a complex computer-based naval air defense simulation, making a series of decisions based on attributes of the enemy target. The number of attributes was either constant or variable, and—consistent with prior cognitive science research—participants demonstrated better learning outcomes under the variable practice conditions. Practice variability slows initial acquisition, but is thought to help learners build more complex mental models of task performance and enhance their self-efficacy for performance.
Two other mechanisms for promoting effortful learning are active learning and Desirable Difficulties. As discussed previously, active learning is an empirically supported strategy for increasing trainees’ attention to training (Bell & Kozlowski, 2002a, 2010). Active learning also promotes control of the learning environment (preferably with instructional guidance) and is believed to trigger a number of inductive processes by learners, such as inferring rules, principles, and performance strategies. In the cognitive domain, Bjork and Bjork (2011) recommend the Desirable Difficulties principle for increasing learner effort and improving retention. Desirable Difficulties refer to empirically supported instructional strategies that prompt learner cognitive effort in the form of the encoding and retrieval processes involved in learning, comprehension, and remembering. Examples of Desirable Difficulties include spacing (rather than massing) learning sessions, interleaving training content from multiple domains, testing rather than restudying, and having learners generate definitions (rather than be told them) (see Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013). Both active learning and Desirable Difficulties work by promoting deeper processing and self-regulation in learners (Bell & Kozlowski, 2010; Bjork, Dunlosky, & Kornell, 2013).
Core Instructional Processes
Each of the training strategies discussed so far represents an emergent method for engaging learners and optimizing training effectiveness. Many are also rooted in modern theories of how we learn. However, there are a number of “tried-and-true” training practices that also increase training effectiveness. Noe and Colquitt (2002) conducted a broad qualitative review of the training literature and identified six characteristics of well-designed instruction that enhance learning. They are as follows: (1) The purpose, objectives, and intended outcomes of training are clearly communicated to trainees. Consistent with Mayer’s (2008) organizing step in learning, these can be communicated at the outset of training. They should also be provided when trainees register for (or are assigned to) training to ensure the proper mindset for training. (2) The training content is meaningful, and training assignments, examples, and exercises are relevant to the job. This supports the integration of new information with prior knowledge (Mayer’s third step), increases motivation to learn (Noe, 1986), especially for older learners (Callahan, Kiker, & Cross, 2003), and facilitates transfer of training (Baldwin & Ford, 1988). (3) Trainees are provided with instructional aids that can help them organize, learn, and recall training content. These aids facilitate selection and organizing processes (Mayer, 2008) and can also reduce cognitive load during learning (Seufert & Brünken, 2006). (4) Opportunities for practice in a safe environment are provided. Carefully designed practice increases cognitive effort during training and also facilitates later transfer. A safe environment encourages learners to take risks and make errors, which can be advantageous during learning (Keith & Frese, 2008). The benefits of practice for skill acquisition in a wide range of performance domains are well documented (McNamara, Hambrick, & Oswald, 2014).
(5) Feedback on learning and mastery is provided by trainers, observers, peers, or the task itself. Again, the benefits of feedback for performance improvement are well established (Azevedo & Bernard, 1995; Hatala, Cook, Zendejas, Hamstra, & Brydges, 2014; Kluger & DeNisi, 1996). Specific feedback that is close in time to behaviors performed in training allows incorrect behaviors to be stopped before habits are acquired, reinforces positive behaviors, and facilitates self-regulation by learners. (6) Training enables learners to observe and interact with others. Allowing interactions among participants promotes observational learning, in which trainees can both mimic high performers and see high performers rewarded (Decker & Nathan, 1985). Additionally, it is sometimes the case that trainers know the content too well and struggle to explain it to a novice, whereas a fellow trainee who has just acquired new knowledge or a new skill may be able to help other trainees make the jump to understanding the new content. Finally, interaction with other learners is critical to socially deriving meaning in learning environments: “learners (can) interpret, clarify and validate their understanding through sustained dialogue (i.e., two-way communication) and negotiation” (Garrison, 1993, p. 202). Further, Kraiger (2008) argued that through this social negotiation during training, trainees learn social skills that will promote later sense-making and learning back on the job.
Characteristics of Effective Systems for Training
The extent to which trainees acquire and apply new knowledge and skills is not only a function of training delivery, but also the larger training system. Thus far, we have examined primarily instructional (in-training) processes. In this section, we examine factors before and after training takes place. As already noted, Noe (1986) was the first to call attention to individual-level and system-level influences on training impact. Since then, other researchers have proposed and tested even broader models of training effectiveness (e.g., Cannon-Bowers, Salas, Tannenbaum, & Mathieu, 1995; Colquitt, LePine, & Noe, 2000). Here training effectiveness refers to individual and system-level influences on the extent to which knowledge and skills are learned, transferred to the job, and maintained. Following Salas et al. (2012), this discussion is divided into pretraining and posttraining influences.
One of the primary actions an organization can take to increase training effectiveness is to conduct a training needs assessment (Goldstein & Ford, 2002). Conceptually, a training needs assessment is a multi-step process to ensure that training targets the requisite knowledge and skills for employees to do their jobs and that the organization at all levels is positioned to support training efforts. In a classic needs assessment, three phases are conducted (Goldstein & Ford, 2002). An organizational assessment identifies whether there is organizational support (e.g., interest and resources) and whether training can be tied to broader organizational initiatives (e.g., lean production). To a large extent, the results of the organizational assessment determine whether or not training is likely to be chosen as an intervention. The second phase is a job or task analysis, in which the key tasks to be trained and/or the core skills to perform those tasks are identified. The job/task analysis identifies the training content. The third phase is a person analysis, in which the training participants are identified (e.g., low performers) and/or special characteristics of participants are investigated (e.g., familiarity and comfort with mobile learning). The former focus assumes that not all workers in a job will be assigned to training. The latter assumes that training can be customized to learner preferences or attributes. Neither assumption is valid in many training contexts.
Within the training field, there is also an implicit assumption that the three phases are conducted in the order presented, that is, that the organizational assessment precedes the job/task analysis, which in turn precedes the person analysis. However, there is no research that supports this ordering over another. Logically, if an organizational assessment reveals inadequate support and resources for training, there would be no need for the job and person analysis. However, it could also be the case that the decision to support training is dependent on what specifically needs to be trained or on how many employees have subpar performance. Further, assessing worker performance (part of the person analysis) could yield insights as to which tasks should be the focus of the job/task analysis. Although the three phases were proposed many decades ago (Goldstein, 1974; McGehee & Thayer, 1961), we still know very little about the practical implications of variations in their ordering.
A more recent training needs assessment model is more prescriptive about the order of steps and is also more practical in many applied contexts. Surface (2012) has proposed a four-phase training needs assessment (TNA) process, which consists of a Needs Identification Phase (addressing whether or not a TNA should be conducted), a Needs Specification Phase (identifying specific existing knowledge, skill, or performance gaps and determining whether learning can address those gaps), a Training Needs Assessment Phase (aspects of traditional organizational, task, and person analyses, as determined by the prior phase), and a TNA Evaluation Phase (determining the impact of decisions made during earlier phases on the identified gaps). The four-phase process has many potential advantages over prior models. Primarily, it helps stakeholders focus on critical decisions at different times. Additionally, decision-making (and analysis) resources are conserved, since an all-out organizational/task/person analysis is only conducted as warranted. Finally, it clarifies for decision-makers that training is only one of many possible solutions to perceived performance gaps (cf. Robinson & Robinson, 1995).
Despite the practical importance of training needs assessment, there is little evidence that it is frequently done to support training design. Both the Association for Talent Development and Training magazine conduct yearly surveys of training practices in U.S. companies. Neither survey tracks expenditures or activities related to needs assessment. The 2014 survey reported in Training (2014) listed 23 categories of anticipated expenditures for 2015. Although there is no category for TNA, there is one for “Content Development,” and presumably some training professionals anticipating developing new content would conduct a needs assessment prior to doing so. Only 29% of surveyed companies reported plans to invest in content development, thus setting the upper bound on estimates of the frequency of TNA. A more somber estimate can be drawn from Arthur et al.’s (2003) meta-analysis of training effectiveness. The authors reported that a prior needs assessment was documented for only 6% of the reported effect sizes.
While the disconnect between the importance and prevalence of needs assessment is more of a concern for practice than for science, there are nonetheless research implications. For one, it would be valuable to understand better why training needs assessment is not done more frequently. There are likely high perceived costs in time and resources, but it would be interesting to approach the problem from a cost–benefit perspective. There are multiple financial benefits of conducting a needs assessment: unnecessary training is reduced, individuals who do not need to be trained are kept on the job and not sent to training, and training is designed to maximize training outcomes. Applied research could investigate whether training professionals are aware of these benefits and to what extent these benefits factor into training investment decisions relative to the costs of needs assessments. For example, at what point do the potential savings from a well-designed needs assessment outweigh the costs in the minds of training stakeholders? There have also been occasional calls for rapid or just-in-time training needs assessment in which training professionals are able to stand up micro training programs quickly in response to emerging performance problems (e.g., Leat & Lovell, 1997; Wilson, Jonassen, & Cole, 1993). The effectiveness of these approaches and their attractiveness and credibility relative to traditional training programs are also interesting research questions. Finally, broader questions of how best to position training in general and needs assessment in particular as strategic investments on the part of the organization bear greater attention from training researchers (Iqbal & Khan, 2011).
In addition to the importance of linking training content to a training needs assessment, the other critical factor to be addressed pretraining is ensuring that trainees are motivated to learn. Motivation to learn refers to the extent to which trainees are willing to exert effort in the training environment (Noe & Schmitt, 1986) and consists of both the perceived value of mastering the training content and trainees’ belief that they are capable of learning it. Motivation to learn has a number of positive impacts on the training function—trainees are more likely to choose to attend training, they exert more effort and persist in training activities, and they are more likely to transfer learned skills to the job (Quiñones, 1995). Trainee motivation to learn is partly determined by individual characteristics, including proactive personality and Big Five personality variables (Major, Turner, & Fletcher, 2006), as well as self-efficacy and a mastery orientation (see Colquitt, LePine, & Noe, 2000; Noe & Colquitt, 2002, for a review). However, motivation to learn is also influenced by factors under the control of the organization, including how the need for training is framed to trainees (e.g., as mandatory or optional, remedial or developmental; Baldwin, Magjuka, & Loher, 1991; Hicks & Klimoski, 1987; Tsai & Tai, 2003) and whether employees believe they were fairly chosen for training (Quiñones, 1995). Additionally, motivation to learn is greater when training is promoted as intrinsically beneficial to the learner (Kooij, De Lange, Jansen, Kanfer, & Dikkers, 2011). Perceived or anticipated support for training can also affect motivation to learn. For example, when trainees perceive that the training is sanctioned by their supervisor, that they will have the opportunity to use what they learned in training, and that they will have both supervisor and peer support for training, their motivation for training is greater (Kim-Soon, Ahmad, & Ahmad, 2014).
Finally, and not surprisingly, prior experiences in training can influence future motivation to learn. Sitzmann, Brown, Ely, and Kraiger (2009) measured motivation at multiple time points in one organization and found that trainees who had negative reactions to earlier training courses had lower training motivation in later courses. For more on organizational-level influences, see Colquitt and associates (Colquitt et al., 2000; Noe & Colquitt, 2002).
Motivation to learn could be assessed as part of a prior needs assessment, measured at the outset of training, or gauged by the trainer based on prior knowledge of the trainees and the course material. Given its importance to training outcomes, it is important that training be designed to maximize trainee motivation regardless of pre-existing levels. In general, motivation to learn can be maximized to the extent that supervisors have already communicated the importance and value of the training, the trainer or training materials clearly communicate what is to be learned and how this information is relevant and useful to trainees, training content links new information to prior knowledge of trainees or their prior experiences, and plans to support the trainee back on the job are clearly communicated.
Training has been shown to have a significant impact on learning and performance outcomes in multiple meta-analyses. Training effectiveness can be enhanced through well-designed training that incorporates empirically supported instructional principles and best practices; through a needs assessment that ensures the training content is relevant to the organization and the trainees; through attention to trainees' motivation to learn; and through posttraining factors that foster the implementation, maintenance, and generalization of knowledge and behaviors acquired in training.
Transfer of training refers to the extent to which trainees apply the knowledge, skills, and attitudes acquired in training on the job (Baldwin & Ford, 1988; Baldwin, Ford, & Blume, 2009). Kraiger (2002) distinguished between implementing or displaying a behavior on the job, on the one hand, and the effectiveness of that behavior, on the other. While the goal of training is typically to improve trainee job performance (increase behavioral effectiveness), the immediate transfer goal should be that trainees carry out learned behaviors on the job (irrespective of effectiveness). For example, suppose a human resource specialist is instructed on how to ask probing questions in a structured interview. Transfer first occurs when the specialist uses probing questions during an actual structured interview. It could be the case that doing so does not feel "natural" and sounds forced, or that some probes work more effectively than others. Nonetheless, there is evidence of transfer because the correct behaviors were displayed. Behavioral effectiveness, or performance, comes later with practice, feedback, and personalization of the questions. The immediate goal, however, is the display. In addition to the personal and situational factors discussed in this section, training fidelity (the psychological and physical correspondence between the training and work contexts) facilitates this form of transfer (Baldwin & Ford, 1988; Burke & Hutchins, 2007; Holton & Baldwin, 2003).
Baldwin and Ford (1988) further distinguished between maintenance and generalization. If a behavior is attempted once and then discarded, transfer has not occurred; the behavior must be learned well enough that it is repeated and maintained over time. Generalization refers to the extent to which the learned behavior is executed in response to situations beyond those covered in training. For example, if the human resource specialist applies similar probing questions across a variety of jobs, or when conducting exit interviews, generalization has occurred.
Transfer of training can be influenced by variables present before, during, and after training (Kraiger & Culbertson, 2013). For example, pretraining motivation to learn is related both to posttraining motivation to transfer (Kontoghiorghes, 2002, 2004) and to actual transfer of training (Gegenfurtner & Vauras, 2012). Many of the training design factors previously discussed, such as practice variability, are positive influences on transfer of training. This section thus focuses primarily on posttraining factors.
Posttraining factors influencing the transfer of training back to the job can be divided into characteristics of the trainee, the work, peers and supervisors, and the transfer or organizational climate. These broad factors were evaluated empirically in a meta-analysis by Blume, Ford, Baldwin, and Huang (2010). With respect to trainee characteristics, not surprisingly, trainee motivation to transfer is predictive of actual transfer back to the job. In their meta-analysis of 89 studies, Blume et al. found a mean population correlation of .29 between motivation and actual transfer. Consistent with the Theory of Planned Behavior (Ajzen, 1991), trainees who intend to apply what they learned are more apt to do so. In addition, a number of other trainee characteristics predicted transfer, including cognitive ability (ρ = .37), conscientiousness (ρ = .28), pretraining self-efficacy (ρ = .22), learning goal orientation (ρ = .16), and job involvement (ρ = .38). Essentially, the workers most likely to be the "best trainees" are also those most likely to succeed in applying what they learned back on the job.
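The ρ values reported in such meta-analyses are population correlation estimates, corrected for unreliability in the measures. A minimal sketch of the classic Hunter–Schmidt correction for attenuation, using invented reliability values for illustration, shows how an observed r maps to an estimated ρ:

```python
from math import sqrt

def corrected_correlation(r_obs, rel_x, rel_y):
    """Disattenuate an observed correlation by the reliabilities of the
    two measures to estimate the population correlation (rho)."""
    return r_obs / sqrt(rel_x * rel_y)

# Illustrative values only: an observed r of .20 between motivation to
# transfer and actual transfer, with measure reliabilities of .80 and .70.
rho = corrected_correlation(0.20, 0.80, 0.70)
print(round(rho, 2))  # 0.27
```

Because reliabilities are below 1.0, corrected values are always at least as large as the observed correlations they are derived from.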
Another strong determinant of transfer is what Ford and colleagues referred to as "opportunity to perform," or the extent to which trainees can immediately apply what they learned to relevant job situations (Ford, Quiñones, Sego, & Sorra, 1992; Quiñones, Ford, Sego, & Smith, 1995). Opportunity to perform may be attenuated, for example, when a trainee is assigned to work that is different from what he or she was trained in, or when there is organizational pressure for high job performance rather than for experimenting with new job skills. Low opportunity to perform has a negative impact on trainee motivation to transfer (Seyler, Holton, Bates, Burnett, & Carvalho, 1998), which in turn reduces actual transfer (Axtell, Maitlis, & Yearta, 1997).
Social support is important for learned skills to be attempted, maintained, and generalized on the job. Both peers and supervisors can support trainees by inquiring about what was learned, encouraging attempts to transfer, supporting mistakes, and reinforcing successful attempts (Rouiller & Goldstein, 1993; Tracey, Tannenbaum, & Kavanagh, 1995). Blume et al. (2010) reported a significant, positive effect for social support (ρ = .21), although they did not differentiate between the impact of peers or supervisors. While there is tremendous variability in the studies of the effectiveness of social support on transfer (Aguinis & Kraiger, 2009; Kraiger & Culbertson, 2013), when social support does positively impact transfer, research suggests it does so through the mediating effects of trainee self-efficacy, motivation to transfer, and mastery goal orientation (Chiaburu, Van Dam, & Hutchins, 2010).
Finally, the broader transfer climate can influence the extent to which knowledge and skills acquired in training are applied and maintained on the job. Fleishman, Harris, and Burtt (1955) conducted the first empirical study to show that a supportive climate (via the actions of managers) affects the maintenance of trained behaviors. The first known measure of transfer climate was developed by Rouiller and Goldstein (1993), who used a critical incident approach to measure the situational cues (goals, social cues, and task cues) and consequences (feedback, reinforcers, and punishment) that either facilitate or inhibit transfer. Subsequently, Holton, Bates, and Ruona (2000) developed and extensively validated a broader, proprietary transfer climate measure.
While some individual studies have found that transfer climate improves transfer (e.g., Kontoghiorghes, 2004), others have not found significant effects (e.g., Cheng & Hampson, 2008). Blume et al. (2010) did report a significant, positive effect for transfer climate on actual transfer (ρ = .27), suggesting that transfer climate matters more often than it doesn't. Kraiger and Culbertson (2013) noted that it is important to distinguish objective organizational support from transfer climate, which is often measured from the perspective of the trainee. Transfer climate is defined as an aggregation of trainees' perceptions of: (a) supervisor and peer support, (b) opportunity to perform, and (c) accountability and/or consequences for performing. Perceptions matter, but researchers who measure climate using perception-based measures run the risk of exaggerating effects via common method variance. For example, in their transfer meta-analysis, Blume et al., when possible, separated studies with a same-source/same-measurement-context (SS/SMC) confound from those without it. Specific to the effects of the work environment on transfer, the effect size for SS/SMC studies (ρ = .54) was over twice that for unconfounded studies (ρ = .23).
While transfer climate research is often limited due to methodological problems, it seems undeniable that transfer is more likely to occur when there are opportunities to perform coupled with peer/supervisor support, as well as accountability and consequences for either performing or not performing. There is also evidence to suggest that perceived transfer climate differs across organizations, or organizational units, suggesting that climate is “real” beyond the eyes of trainees. For example, Holton, Chen, and Naquin (2003) found strong, significant differences in scale scores for peer support, supervisory support, supervisor sanctions, and opportunity to use learning across eight different organizations. Additionally, Bates and Khasawneh (2005) found significant differences across 28 organizations in scale scores for transfer climate dimensions of transfer effort–performance expectations, performance–outcome expectations, performance self-efficacy beliefs, and openness to change perceptions, and, more importantly, found these climate variables mediated the relationship between learning organizational culture and innovation.
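One common way to ask whether climate is "real" beyond individual perceptions is to compute an intraclass correlation, ICC(1), from a one-way ANOVA of climate ratings grouped by organization; a high value indicates that ratings cluster by unit. The sketch below uses invented ratings and a balanced design, and illustrates the general approach rather than the specific analyses in the studies cited above.

```python
def icc1(groups):
    """ICC(1) from a one-way ANOVA for a balanced design:
    (MSB - MSW) / (MSB + (k - 1) * MSW), where k is the group size."""
    k = len(groups[0])                       # raters per organization
    n_groups = len(groups)
    grand = sum(sum(g) for g in groups) / (n_groups * k)
    means = [sum(g) / k for g in groups]
    ms_between = k * sum((m - grand) ** 2 for m in means) / (n_groups - 1)
    ss_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    ms_within = ss_within / (n_groups * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Invented transfer-climate ratings (1-5 scale) from four trainees in each
# of three organizations; large between-organization differences yield a
# high ICC(1), consistent with climate varying by organization.
ratings = [[4.0, 4.2, 3.8, 4.0],
           [3.0, 3.2, 2.8, 3.0],
           [2.0, 2.2, 1.8, 2.0]]
print(round(icc1(ratings), 2))  # 0.97
```

An ICC(1) near zero would instead suggest that "climate" reflects idiosyncratic individual perceptions rather than a shared organizational property.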
In total, the evidence supports the importance of the immediate transfer environment in reinforcing what was learned in training and encouraging application to the job. Salas et al. (2012) provided a number of practical recommendations for organizations interested in promoting transfer, including conducting debriefings of training or of critical job events related to training content (Berenholtz et al., 2009; Brock, McManus, & Hale, 2009), providing access to job aids and knowledge repositories with reminders about what was learned (Gallupe, 2001; Rosenberg, 1995), and establishing "communities of practice" with other learners to foster continuous learning (Wenger, 1998; Wenger, McDermott, & Snyder, 2002).
A final characteristic of effective training is comprehensive evaluation. Kraiger et al. (1993) defined training evaluation as the collection and analysis of data to understand whether training objectives were achieved and/or whether meeting those objectives resulted in improved performance on the job. Training evaluation thus becomes important to organizational decision-making about training (Kraiger, 2002): Is training designed in a way that trainees are learning? Do trainees like the training they are receiving, and do they want to continue taking more training? Do knowledge and skills acquired in training transfer to the job? Have the performance gaps identified in a needs assessment gone away? Training evaluation closes the loop.
For decades, training evaluation practice has been influenced primarily by Kirkpatrick's (Kirkpatrick & Kirkpatrick, 2006) "four levels" model: Do trainees like the training, did they learn anything, did they change their behavior back on the job, and did changes on the job result in improved performance? The four levels consist of trainees' reactions to (or affect toward) the training content, whether or not they learned the training content, whether or not they changed their behavior back on the job (as a result of being trained), and whether or not there is some organizational-level benefit to changes in job behavior.
While there are still training researchers who rely on the Kirkpatrick approach (e.g., Pineda-Herrero, Belvis, Moreno, Duran-Bellonch, & Ucar, 2011; Roszkowski & Soven, 2010), increasingly, researchers are recognizing the practical and theoretical shortcomings of this framework (e.g., Holton, 1996; Kraiger, 2002; Spitzer, 2005). In brief, the framework leads to a reflexive approach, with the goal of evaluation being to check more boxes (levels) rather than linking evaluation decisions to both the purpose of evaluation (Kraiger, 2002) and the learning constructs that are the focus of the training intervention (Kraiger, 2002; Kraiger et al., 1993).
In training research, there is evidence of more mindful training evaluation practices. Ford, Kraiger, and Merritt (2010) reviewed 125 studies citing Kraiger et al.’s learning outcomes taxonomy (1993) and found that researchers are increasingly adopting multidimensional approaches to understanding learning, measuring cognitive or affective change as measures of learning (instead of behavioral change alone), using increasingly sophisticated assessment methods (e.g., Davis & Yi, 2004; Day, Arthur, & Gettman, 2001), and measuring trainee affect other than training satisfaction (e.g., Bell & Kozlowski, 2002b; Wallhead & Ntoumanis, 2004).
Additionally, there have been a number of thoughtful papers on technical evaluation issues. Arvey and Cole (1989) discussed the analysis of evaluation data when control groups and pretest scores are available, and demonstrated the advantage of using analysis of covariance in these situations. Yang, Sackett, and Arvey (1996) reviewed the importance of statistical power in training evaluation designs and recommended several strategies for increasing power when the number of trainees is constrained. Haccoun and Hamtiaux (1994) recognized that in many applied situations a control group is not possible, and proposed a useful technique, the internal referencing strategy, for inferring learning from a single-group, pretest–posttest design. Sitzmann and Weinhardt (in press) presented a new training engagement model that recommends both multilevel and temporal considerations in evaluating training impact.
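Haccoun and Hamtiaux's internal referencing strategy can be illustrated with a small numeric sketch: learning is inferred when pre-to-post gains on items relevant to the training content exceed gains on content-irrelevant items included on the same test. The proportion-correct scores below are invented for illustration.

```python
def mean(xs):
    return sum(xs) / len(xs)

def internal_referencing_delta(pre_relevant, post_relevant,
                               pre_irrelevant, post_irrelevant):
    """Gain on training-relevant items minus gain on irrelevant items;
    a positive delta is evidence of learning without a control group."""
    relevant_gain = mean(post_relevant) - mean(pre_relevant)
    irrelevant_gain = mean(post_irrelevant) - mean(pre_irrelevant)
    return relevant_gain - irrelevant_gain

# Invented proportion-correct scores for three trainees:
delta = internal_referencing_delta(
    pre_relevant=[0.40, 0.50, 0.45],
    post_relevant=[0.70, 0.80, 0.75],
    pre_irrelevant=[0.45, 0.55, 0.50],
    post_irrelevant=[0.50, 0.58, 0.54],
)
print(round(delta, 2))  # 0.26: relevant items improved far more
```

The irrelevant items act as an internal baseline: if both item types improved equally, any gain could reflect retesting effects rather than learning.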
Finally, both training professionals and researchers acknowledge that while thorough evaluation is important, it is often difficult to achieve in practice. Organizations may resist evaluation because evaluation time detracts from training time, and negative evaluation outcomes can be threatening to training champions. Training researchers have responded with multiple strategies for building support for training evaluation. Brinkerhoff (2005) offered methods for linking evaluation practices to key strategic objectives of the organization. Nickols (2005) integrated stakeholder theory with evaluation practices, and both he and Kraiger, McLinden, and Casper (2004) discussed strategies for involving key stakeholders in the design of training evaluation.
The rate of change and the need for adaptation in organizations are continually increasing. Challenges created by rapid design and prototyping, leaner organizations, and the global marketplace make it increasingly important that organizations compete on the basis of talent, rather than relying on customer loyalty or product design (Boudreau & Ramstad, 2005). There is cumulative evidence that continual investment in the training and development of employees results in a competitive edge for organizations (Park & Jacobs, 2011; Saks & Burke-Smalley, 2014; Tharenou, Saks, & Moore, 2007), particularly when coupled with effective staffing programs (Kim & Ployhart, 2014; Van Iddekinge, Ferris, Perrewé, Perryman, Blass, & Heetderks, 2009).
However, training practice and training research will continue to be challenged by changes in how work is done (e.g., increasing use of technology) and where we work (e.g., the virtual workplace). Information is more available and more rapidly disseminated than ever before, but more information doesn’t necessarily mean more knowledge or better performance.
The argument has been made continually that training is most effective when systematic processes are followed: conducting a needs assessment, designing training carefully, planning for and supporting transfer, and evaluating on an ongoing basis. As the demands on the workforce change, and as the nature of work changes, it will be important for training professionals and researchers not only to continue to push for systematic, mindful practice, but also to evaluate continually whether those same processes should be preserved or themselves warrant reexamination.
Aguinis, H., & Kraiger, K. (2009). Benefits of training and development for individuals and teams, organizations, and society. Annual Review of Psychology, 60, 451–474.
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179–211.
Arthur, W., Jr., Bennett, W., Jr., Edens, P. S., & Bell, S. T. (2003). Effectiveness of training in organizations: A meta-analysis of design and evaluation features. Journal of Applied Psychology, 88, 234–245.
Arvey, R. D., & Cole, D. A. (1989). Evaluating change due to training. In I. L. Goldstein & Associates (Eds.), Training and development in organizations (pp. 89–117). San Francisco: Jossey-Bass.
Axtell, C. M., Maitlis, S., & Yearta, S. K. (1997). Predicting immediate and longer-term transfer of training. Personnel Review, 26, 201–213.
Azevedo, R., & Bernard, R. M. (1995). A meta-analysis of the effects of feedback in computer-based instruction. Journal of Educational Computing Research, 13, 111–127.
Baldwin, T. T., & Ford, J. K. (1988). Transfer of training: A review and directions for future research. Personnel Psychology, 41, 63–105.
Baldwin, T. T., Ford, J. K., & Blume, B. D. (2009). Transfer of training 1988–2008: An updated review and new agenda for future research. In G. P. Hodgkinson & J. K. Ford (Eds.), International Review of Industrial and Organizational Psychology (Vol. 24, pp. 41–70). Chichester, U.K.: Wiley.
Baldwin, T. T., Magjuka, R. J., & Loher, B. T. (1991). The perils of participation: Effects of choice of training on trainee motivation and learning. Personnel Psychology, 44, 260–267.
Bates, R., & Khasawneh, S. (2005). Organizational learning culture, learning transfer climate and perceived innovation in Jordanian organizations. International Journal of Training and Development, 9, 96–109.
Bell, B. S., & Kozlowski, S. W. (2002a). Adaptive guidance: Enhancing self‐regulation, knowledge, and performance in technology‐based training. Personnel Psychology, 55, 267–306.
Bell, B. S., & Kozlowski, S. W. J. (2002b). Goal orientation and ability: Interactive effects on self-efficacy, performance, and knowledge. Journal of Applied Psychology, 87, 497–505.
Bell, B. S., & Kozlowski, S. W. J. (2008). Active learning: Effects of core training design elements on self-regulatory processes, learning, and adaptability. Journal of Applied Psychology, 93, 296–316.
Bell, B. S., & Kozlowski, S. W. J. (2010). Toward a theory of learner-centered training design: An integrative framework of active learning. In S. W. J. Kozlowski & E. Salas (Eds.), Learning, training, and development in organizations (pp. 263–300). New York: Routledge.
Berenholtz, S. M., Schumacher, K., Hayanga, A. J., Simon, M., Goeschel, C., Pronovost, P. J., et al. (2009). Implementing standardized operating room briefings and debriefings at a large regional medical center. Joint Commission Journal on Quality and Patient Safety, 35, 391–397.
Berthold, K., Nückles, M., & Renkl, A. (2007). Do learning protocols support learning strategies and outcomes? The role of cognitive and metacognitive prompts. Learning and Instruction, 17, 564–577.
Bjork, E. L., & Bjork, R. A. (2011). Making things hard on yourself, but in a good way: Creating desirable difficulties to enhance learning. In M. A. Gernsbacher, R. W. Pew, L. M. Hough, & J. R. Pomerantz (Eds.), Psychology and the real world: Essays illustrating fundamental contributions to society (pp. 56–64). New York: Worth Publishers.
Bjork, R. A., Dunlosky, J., & Kornell, N. (2013). Self-regulated learning: Beliefs, techniques, and illusions. Annual Review of Psychology, 64, 417–444.
Blume, B. D., Ford, J. K., Baldwin, T. T., & Huang, J. L. (2010). Transfer of training: A meta-analytic review. Journal of Management, 36, 1065–1105.
Boudreau, J., & Ramstad, P. (2005). Talentship, talent segmentation, and sustainability: A new HR decision science paradigm for a new strategy definition. Human Resource Management, 44, 129–136.
Brinkerhoff, R. O. (2005). The success case method: A strategic evaluation approach to increasing the value and effect of training. Advances in Developing Human Resources, 7(1), 86–101.
Brock, G. W., McManus, D. J., & Hale, J. E. (2009). Reflections today prevent failures tomorrow. Communications of the ACM, 52, 140–144.
Burke, L. A., & Hutchins, H. M. (2007). Training transfer: An integrative literature review. Human Resource Development Review, 6, 263–296.
Burke, M. J., & Day, R. R. (1986). A cumulative study of the effectiveness of managerial training. Journal of Applied Psychology, 71, 232–245.
Callahan, J. S., Kiker, D. S., & Cross, T. (2003). Does method matter? A meta-analysis of the effects of training method on older learner training performance. Journal of Management, 29, 663–680.
Campbell, J. P. (1971). Personnel training and development. Annual Review of Psychology, 22, 565–602.
Cannon-Bowers, J. A., Salas, E., Tannenbaum, S. I., & Mathieu, J. E. (1995). Toward theoretically based principles of training effectiveness: A model and initial empirical investigation. Military Psychology, 7, 141–164.
Cheng, E. W., & Hampson, I. (2008). Transfer of training: A review and new insights. International Journal of Management Reviews, 10, 327–341.
Chiaburu, D. S., Van Dam, K., & Hutchins, H. M. (2010). Social support in the workplace and training transfer: A longitudinal analysis. International Journal of Selection and Assessment, 18, 187–200.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Mahwah, NJ: Erlbaum.
Collins, D. B., & Holton, E. F., III. (2004). The effectiveness of managerial leadership development programs: A meta-analysis of studies from 1982 to 2001. Human Resource Development Quarterly, 15, 217–248.
Colquitt, J. A., LePine, J. A., & Noe, R. A. (2000). Towards an integrative theory of training motivation: A meta-analytic path analysis of 20 years of research. Journal of Applied Psychology, 85, 678–707.
Davis, F. D., & Yi, M. Y. (2004). Improving computer skill training: Behavior modeling, symbolic mental rehearsal, and the role of knowledge structures. Journal of Applied Psychology, 89, 509–523.
Day, E. A., Arthur, W., & Gettman, D. (2001). Knowledge structures and the acquisition of a complex skill. Journal of Applied Psychology, 86, 1022–1033.
Decker, P. J., & Nathan, B. R. (1985). Behavior modeling training: Principles and applications. New York: Praeger.
DeRouin, R. E., Fritzsche, B. A., & Salas, E. (2005). Optimizing e‐learning: Research‐based guidelines for learner‐controlled training. Human Resource Management, 43, 147–162.
Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students' learning with effective learning techniques: Promising directions from cognitive and educational psychology. Psychological Science in the Public Interest, 14(1), 4–58.
Fleishman, E. A., Harris, E. F., & Burtt, H. E. (1955). Leadership and supervision in industry (Report No. 33). Columbus, OH: Bureau of Educational Research, The Ohio State University.
Ford, J. K., Kraiger, K., & Merritt, S. M. (2010). An updated review of the multidimensionality of training outcomes: New directions for training education research. In S. W. J. Kozlowski & E. Salas (Eds.), Learning, training, and development in organizations (pp. 135–168). New York: Routledge.
Ford, J. K., Quiñones, M. A., Sego, D. J., & Sorra, J. S. (1992). Factors affecting the opportunity to perform trained tasks on the job. Personnel Psychology, 45, 511–527.
Gallupe, B. (2001). Knowledge management systems: Surveying the landscape. International Journal of Management Reviews, 3, 61–77.
Gardner, H. (1987). The mind's new science: A history of the cognitive revolution. New York: Basic Books.
Garrison, D. R. (1993). A cognitive constructivist view of distance education: An analysis of teaching-learning assumptions. Distance Education, 14, 199–211.
Gegenfurtner, A., & Vauras, M. (2012). Age-related differences in the relation between motivation to learn and transfer of training in adult continuing education. Contemporary Educational Psychology, 37, 33–46.
Goldstein, I. L. (1974). Training: Program development and evaluation. Monterey, CA: Brooks/Cole.
Goldstein, I. L., & Ford, J. K. (2002). Training in organizations: Needs assessment, development and evaluation (4th ed.). Belmont, CA: Wadsworth.
Haccoun, R. R., & Hamtiaux, T. (1994). Optimizing knowledge tests for inferring learning acquisition levels in single group training evaluation designs: The internal referencing strategy. Personnel Psychology, 47, 593–604.
Hatala, R., Cook, D. A., Zendejas, B., Hamstra, S. J., & Brydges, R. (2014). Feedback for simulation-based procedural skills training: A meta-analysis and critical narrative synthesis. Advances in Health Sciences Education, 19, 251–272.
Hicks, W. D., & Klimoski, R. (1987). The process of entering training programs and its effect on training outcomes. Academy of Management Journal, 30, 542–552.
Holladay, C. L., & Quiñones, M. A. (2003). Practice variability and transfer of training: The role of self-efficacy generality. Journal of Applied Psychology, 88, 1094–1103.
Holton, E. F., Bates, R. A., & Ruona, W. E. (2000). Development of a generalized learning transfer system inventory. Human Resource Development Quarterly, 11, 333–360.
Holton, E. F., Chen, H. C., & Naquin, S. S. (2003). An examination of learning transfer system characteristics across organizational settings. Human Resource Development Quarterly, 14, 459–482.
Holton, E. F., III. (1996). The flawed four-level evaluation model. Human Resource Development Quarterly, 7, 5–21.
Holton, E. F., III, & Baldwin, T. T. (2003). Making transfer happen: An action perspective on learning transfer systems. In E. F. Holton III & T. T. Baldwin (Eds.), Improving learning transfer in organizations (pp. 3–15). San Francisco: Jossey-Bass.
Iqbal, M. Z., & Khan, R. A. (2011). The growing concept and uses of training needs assessment: A review with proposed model. Journal of European Industrial Training, 35, 439–466.
Keith, N., & Frese, M. (2008). Effectiveness of error management training: A meta-analysis. Journal of Applied Psychology, 93, 59–69.
Kim, Y., & Ployhart, R. E. (2014). The effects of staffing and training on firm productivity and profit growth before, during, and after the Great Recession. Journal of Applied Psychology, 99, 361–389.
Kim-Soon, N., Ahmad, N., & Ahmad, A. R. (2014). Moderating effects of work environment on motivation to learn and perceived training transfer: Empirical evidence from a bank. Australian Journal of Basic and Applied Sciences, 8, 344–361.
Kirkpatrick, D. L., & Kirkpatrick, J. D. (2006). Evaluating training programs: The four levels (3rd ed.). San Francisco: Berrett-Koehler.
Kluger, A. N., & DeNisi, A. (1996). The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin, 119, 254–284.
Kontoghiorghes, C. (2002). Predicting motivation to learn and motivation to transfer learning back to the job in a service organization: A new systemic model for training effectiveness. Performance Improvement Quarterly, 15(3), 114–129.
Kontoghiorghes, C. (2004). Reconceptualizing the learning transfer conceptual framework: Empirical validation of a new systemic model. International Journal of Training and Development, 8, 210–221.
Kooij, D. T., De Lange, A. H., Jansen, P. G., Kanfer, R., & Dikkers, J. S. (2011). Age and work‐related motives: Results of a meta‐analysis. Journal of Organizational Behavior, 32, 197–225.
Kraiger, K. (2002). Decision-based evaluation. In K. Kraiger (Ed.), Creating, implementing, and maintaining effective training and development: State-of-the-art lessons for practice (pp. 331–375). San Francisco: Jossey-Bass.
Kraiger, K. (2008). Transforming our models of learning and development: Web‐based instruction as enabler of third‐generation instruction. Industrial and Organizational Psychology, 1, 454–467.
Kraiger, K., & Culbertson, S. S. (2013). Understanding and facilitating learning: Advancements in training and development. In I. Weiner, N. Schmitt, & S. Highhouse (Eds.), Handbook of psychology: Industrial and organizational psychology (2d ed., pp. 244–261). Hoboken, NJ: Wiley.
Kraiger, K., Ford, J. K., & Salas, E. (1993). Application of cognitive, skill-based, and affective theories of learning outcomes to new methods of training evaluation. Journal of Applied Psychology, 78, 311–328.
Kraiger, K., & Jerden, E. (2007). A meta-analytic investigation of learner control: Old findings and new directions. In S. M. Fiore & E. Salas (Eds.), Toward a science of distributed learning (pp. 65–90). Washington, DC: American Psychological Association.Find this resource:
Kraiger, K., & Jung, K. M. (1997). Linking training objectives to evaluation criteria. In M. A. Quinones & A. Ehrenstein (Eds.), Training for a rapidly changing workplace: Applications of psychological research (pp. 151–175). Washington, DC: American Psychological Association.Find this resource:
Kraiger, K., McLinden, D., & Casper, W. J. (2004). Collaborative planning for training impact. Human Resource Management, 43, 337–351.Find this resource:
Landers, R. N., & Reddock, C. M. (in press). A meta-analytic investigation of objective learner control in web-based instruction. Journal of Business and Psychology.Find this resource:
Leat, M. J., & Lovell, J. M. (1997). Training needs analysis: weaknesses in the conventional approach. Journal of European Industrial Training, 21, 143–153.Find this resource:
Luiten, J., Ames, W., & Ackerson, G. (1980). A meta-analysis of the effects of advanced organizers on learning and retention. American Educational Research Journal, 17, 211–218.
MacNamara, B. N., Hambrick, D. Z., & Oswald, F. L. (2014). Deliberate practice and performance in music, games, sports, education, and professions: A meta-analysis. Psychological Science, 25, 1608–1618.
Major, D. A., Turner, J. E., & Fletcher, T. D. (2006). Linking proactive personality and the Big Five to motivation to learn and development activity. Journal of Applied Psychology, 91, 927–935.
Mattingly, V., Kraiger, K., & Huntington, H. (2016, April). Can emotional intelligence be trained? A meta-analytic investigation. Paper presented at the annual conference of the Society for Industrial-Organizational Psychology, Anaheim, CA.
Mayer, R. E. (1979). Twenty years of research on advance organizers: Assimilation theory is still the best predictor of results. Instructional Science, 8, 133–167.
Mayer, R. E. (2008). Applying the science of learning: Evidence-based principles for the design of multimedia instruction. American Psychologist, 63, 760–769.
McGehee, W., & Thayer, P. W. (1961). Training in business and industry. New York: Wiley.
Nickols, F. (2005). Why a stakeholder approach to evaluating training. Advances in Developing Human Resources, 7, 121–134.
Noe, R. A. (1986). Trainees’ attributes and attitudes: Neglected influences on training effectiveness. Academy of Management Review, 11, 736–749.
Noe, R. A. (2010). Employee training and development (5th ed.). Boston: McGraw-Hill/Irwin.
Noe, R. A., Clarke, A. D. M., & Klein, H. J. (2014). Learning in the twenty-first-century workplace. Annual Review of Organizational Psychology and Organizational Behavior, 1, 245–275.
Noe, R. A., & Colquitt, J. A. (2002). Planning for training impact: Principles of training effectiveness. In K. Kraiger (Ed.), Creating, implementing, and maintaining effective training and development: State-of-the-art lessons for practice (pp. 53–79). San Francisco: Jossey-Bass.
Noe, R. A., & Schmitt, N. (1986). The influence of trainee attitudes on training effectiveness: Test of a model. Personnel Psychology, 39, 497–523.
Park, Y., & Jacobs, R. L. (2011). The influence of investment in workplace learning on learning outcomes and organizational performance. Human Resource Development Quarterly, 22, 437–458.
Pineda-Herrero, P., Belvis, E., Moreno, V., Duran-Bellonch, M. M., & Ucar, X. (2011). Evaluation of training effectiveness in the Spanish health sector. Journal of Workplace Learning, 23, 315–330.
Powell, K. S., & Yalcin, S. (2010). Managerial training effectiveness. Personnel Review, 39, 227–241.
Quiñones, M. A. (1995). Pretraining context effects: Training assignment as feedback. Journal of Applied Psychology, 80, 226–238.
Quiñones, M. A., Ford, J. K., Sego, D. J., & Smith, E. M. (1995). The effects of individual and transfer environment characteristics on the opportunity to perform trained tasks. Training Research Journal, 1, 29–49.
Randolph, J. J., & Edmondson, R. S. (2005). Using the binomial effect size display (BESD) to present the magnitude of effect sizes to the evaluation audience. Practical Assessment, Research & Evaluation, 10(14), 1–7.
Robinson, D. G., & Robinson, J. C. (1995). Performance consulting: Moving beyond training. San Francisco: Berrett-Koehler.
Rosenberg, M. J. (1995). Performance technology, performance support, and the future of training: A commentary. Performance Improvement Quarterly, 8, 94–99.
Roszkowski, M. J., & Soven, M. (2010). Did you learn something useful today? An analysis of how perceived utility relates to perceived learning and their predictiveness of satisfaction with training. Performance Improvement Quarterly, 23, 71–91.
Rouiller, J. Z., & Goldstein, I. L. (1993). The relationship between organizational transfer climate and positive transfer of training. Human Resource Development Quarterly, 4, 377–390.
Saks, A. M., & Burke-Smalley, L. A. (2014). Is transfer of training related to firm performance? International Journal of Training and Development, 18, 104–115.
Salas, E., & Cannon-Bowers, J. A. (2001). The science of training: A decade of progress. Annual Review of Psychology, 52, 471–499.
Salas, E., DiazGranados, D., Klein, C., Burke, C. S., Stagl, K. C., Goodwin, G. F., & Halpin, S. M. (2008). Does team training improve team performance? A meta-analysis. Human Factors, 50, 903–933.
Salas, E., Nichols, D. R., & Driskell, J. E. (2007). Testing three team training strategies in intact teams: A meta-analysis. Small Group Research, 38, 471–488.
Salas, E., Tannenbaum, S. I., Kraiger, K., & Smith-Jentsch, K. A. (2012). The science of training and development in organizations: What matters in practice. Psychological Science in the Public Interest, 13, 74–101.
Schmidt, R. A., & Bjork, R. A. (1992). New conceptualizations of practice: Common principles in three paradigms suggest new concepts for training. Psychological Science, 3, 207–217.
Seufert, T., & Brünken, R. (2006). Cognitive load and the format of instructional aids for coherence formation. Applied Cognitive Psychology, 20, 321–331.
Seyler, D. L., Holton III, E. F., Bates, R. A., Burnett, M. F., & Carvalho, M. A. (1998). Factors affecting motivation to transfer training. International Journal of Training and Development, 2, 2–16.
Sitzmann, T., Bell, B. S., Kraiger, K., & Kanar, A. M. (2009). A multilevel analysis of the effect of prompting self-regulation in technology-delivered instruction. Personnel Psychology, 62, 697–734.
Sitzmann, T., Brown, K. G., Ely, K., & Kraiger, K. (2009). Motivation to learn in a military training curriculum: A longitudinal investigation. Military Psychology, 21, 534–551.
Sitzmann, T., & Ely, K. (2010). Sometimes you need a reminder: The effects of prompting self-regulation on regulatory processes, learning, and attrition. Journal of Applied Psychology, 95, 132–144.
Sitzmann, T., & Ely, K. (2011). A meta-analysis of self-regulated learning in work-related training and educational attainment: What we know and where we need to go. Psychological Bulletin, 137, 421–442.
Sitzmann, T., & Johnson, S. K. (2012). The best laid plans: Examining the conditions under which a planning intervention improves learning and reduces attrition. Journal of Applied Psychology, 97, 967–981.
Sitzmann, T., & Weinhardt, J. M. (2015). Training engagement theory: A multilevel perspective on the effectiveness of work-related training. Journal of Management.
Spitzer, D. R. (2005). Learning effectiveness measurement: A new approach for measuring and managing learning to achieve business results. Advances in Developing Human Resources, 7, 55–70.
Stone, C. L. (1983). A meta-analysis of advance organizer studies. Journal of Experimental Education, 51, 194–199.
Surface, E. A. (2012). Training needs assessment: Aligning learning and capability with performance requirements and organizational objectives. In M. A. Wilson, W. Bennett, S. Gibson, & G. M. Alliger (Eds.), The handbook of work analysis: The methods, systems, applications and science of work measurement in organizations (pp. 437–462). New York: Routledge Academic.
Taylor, P. J., Russ-Eft, D. F., & Chan, D. W. L. (2005). A meta-analytic review of behavior modeling training. Journal of Applied Psychology, 90, 692–709.
Tharenou, P., Saks, A. M., & Moore, C. (2007). A review and critique of research on training and organizational-level outcomes. Human Resource Management Review, 17, 251–273.
Tracey, J. B., Tannenbaum, S. I., & Kavanagh, M. J. (1995). Applying trained skills on the job: The importance of work environment. Journal of Applied Psychology, 80, 239–252.
Training Magazine (2014). 2014 industry report. Training, 51(11), 16–29. Retrieved from https://trainingmag.com/sites/default/files/magazines/2014_11/2014-Industry-Report.pdf
Tsai, W. C., & Tai, W. T. (2003). Perceived importance as a mediator of the relationship between training assignment and training motivation. Personnel Review, 32, 151–163.
Van Iddekinge, C. H., Ferris, G. R., Perrewé, P. L., Perryman, A. A., Blass, F. R., & Heetderks, T. D. (2009). Effects of selection and training on unit-level performance over time: A latent growth modeling approach. Journal of Applied Psychology, 94, 829–843.
Wallhead, T. L., & Ntoumanis, N. (2004). Effects of a sport education intervention on students’ motivational responses in physical education. Journal of Teaching in Physical Education, 23, 4–18.
Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. New York: Cambridge University Press.
Wenger, E., McDermott, R., & Snyder, W. M. (2002). Cultivating communities of practice. Boston: Harvard Business School Press.
Wilson, B. G., Jonassen, D. H., & Cole, P. (1993). Cognitive approaches to instructional design. In G. Piskurich (Ed.), The ASTD handbook of instructional technology, 4, 21.1–21.22.
Winne, P. H. (1996). A metacognitive view of individual differences in self-regulated learning. Learning and Individual Differences, 8, 327–353.
Yang, H., Sackett, P. R., & Arvey, R. D. (1996). Statistical power and cost in training evaluation: Some new considerations. Personnel Psychology, 49, 651–668.