8.10.2011

A Look Behind Robert Gagné's Nine Steps of Instruction

In her post, Questioning Gagné and Bloom’s Relevance, Christy Tucker describes how we often get caught up in theories without really looking at whether the research supports them. In this post, I would like to point out some of the research and newer findings.

While some think the Nine Steps are ironclad rules, it has been noted at least since 1977 (Good & Brophy, p. 200) that the nine steps are “general considerations to be taken into account when designing instruction. Although some steps might need to be rearranged (or might be unnecessary) for certain types of lessons, the general set of considerations provide a good checklist of key design steps.”

1. Gain attention

In the military we called this an interest device—a story or some other vehicle for capturing the learners' attention and helping them see the importance of the tasks at hand. For example, when I was training operators to load and unload trailers with a forklift, I would search the OSHA reports for the latest incident in which an operator was decapitated after sticking their head out of the forklift's protective cage to get a better view while entering a trailer, catching it between the overhead guard supports and the side of the trailer (it happens more often than we care to think about). This became the basis for a story on why they needed to pay attention: the forklift may be small, but it weighs several tons and can easily take off a limb or another body part if not treated with proper respect.

Wick, Pollock, Jefferson, and Flanagan (2006) describe how research supports extending the interest device into the workplace in order to increase performance when the learners apply their new learning to the job. This is accomplished by having the learners and their managers discuss what they need to learn and be able to perform when they finish the training. This preclass activity ends in a mutual contract between the learners and managers on what is expected to be achieved from the learning activities (this is also closely related to the next step).

2. Tell learners the learning objective

Marzano (1998, p.94) reported an effect size of 0.97 (which indicates that achievement can be raised by 34 percentile points) when goal specification is used. When students have some control over the learning outcomes, there is an effect size of 1.21 (39 percentile points). This is the beauty of using Wick, Pollock, Jefferson, and Flanagan's mutual contract.
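For those who want to check the arithmetic behind those percentile figures, the usual translation (and the one I assume Marzano used) treats the effect size as a z-score and reads the gain off the standard normal curve. A quick sketch using only the Python standard library:

```python
from statistics import NormalDist

def percentile_gain(effect_size: float) -> float:
    """Percentile-point gain implied by an effect size, assuming
    normally distributed scores: 100 * (CDF(d) - 0.5)."""
    return 100 * (NormalDist().cdf(effect_size) - 0.5)

print(f"{percentile_gain(0.97):.1f}")  # 33.4 -> reported as 34 points
print(f"{percentile_gain(1.21):.1f}")  # 38.7 -> reported as 39 points
```

The same translation applies to the other effect sizes quoted in step 3 below (.99 gives about 34 points and 1.48 about 43).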

Of course, the problem that some trainers and instructional designers run into is telling the learners the Learning Objectives word for word, rather than breaking them down into less formal statements.

3. Stimulate recall

This is building on prior learning and forms the basis of scaffolding by 1) building on what the learners know, 2) adding more details, hints, information, concepts, feedback, etc., and 3) then allowing the learners to perform on their own. Allan Collins, John Seely Brown, and Ann Holum (1991) note that scaffolding is the support the master gives apprentices in carrying out a task. This can range from doing almost the entire task for them to giving occasional hints as to what to do next. Fading is the notion of slowly removing the support, giving the apprentice more and more responsibility.

Part of stimulating recall is having the learners take notes and draw mind maps. Learning is enhanced by encouraging the use of graphic representations when taking notes (mind or concept maps). While normal note taking has an overall effect size of .99, indicating a percentile gain of 34 points, graphic representations produced a percentile gain in achievement of 39 points (Marzano, 1998). One of the most effective of these techniques is semantic mapping (Toms-Bronosky, 1980), with an effect size of 1.48 (n=1), indicating a percentile gain of 43 points. With this technique, the learner represents the key ideas in a lesson as nodes (circles) with spokes depicting key details emanating from the node.

4. Present the stimulus, content

Implement (nuff said)

5. Provide guidance, relevance, and organization

Kind of redundant as it relates to the other steps.

6. Elicit the learning by demonstrating it (modeling and observational learning)

Albert Bandura noted that observational learning may or may not involve imitation. For example, if you see the driver in front of you hit a pothole and you swerve to miss it, you learned from observation, not imitation (had you learned by imitation, you would also have hit the pothole). What you learned was the information you processed cognitively and then acted upon. Observational learning is much more complex than simple imitation. Bandura's theory is often referred to as social learning theory because it emphasizes the role of vicarious experience (observation) of people impacting people (models). Modeling has several effects on learners:

  • Acquisition - New responses are learned by observing the model.
  • Inhibition - A response that otherwise may be made is changed when the observer sees a model being punished.
  • Disinhibition - A reduction in fear by observing a model's behavior go unpunished in a feared activity.
  • Facilitation - A model elicits from an observer a response that has already been learned.
  • Creativity - Observing several models performing and then adapting a combination of characteristics or styles.

7. Provide feedback on performance

As Christy's post noted, performance and feedback are good.

8. Assess performance, give feedback and reinforcement

Related to above.

9. Enhance retention and transfer to other contexts

We often think of transfer of learning as just being able to apply new skills and knowledge to the job, but it actually goes beyond that. Transfer of learning is the phenomenon of learning more quickly and developing a deeper understanding of a task when we bring knowledge or skills from previous learning to it. Therefore, to produce positive transfer of learning, we need to practice under a variety of conditions. For more information, see Transfer of Learning.

References

Collins, A., Brown, J. S., & Holum, A. (1991). Cognitive apprenticeship: Making thinking visible. American Educator, 6-46.

Good, T. & Brophy, J. (1990). Educational Psychology: A realistic approach. New York: Holt, Rinehart, & Winston.

Marzano, R. J. (1998). A Theory-Based Meta-Analysis of Research on Instruction. Aurora, Colorado: Mid-continent Regional Educational Laboratory. Retrieved May 2, 2000, from http://www.mcrel.org/products/learning/meta.pdf

Wick, C., Pollock, R., Jefferson, A., & Flanagan, R. (2006). Six Disciplines of Breakthrough Learning: How to Turn Training and Development Into Business Results. San Francisco: Pfeiffer.

7.07.2011

Andragogy vs. Pedagogy

In his post, Learning is learning, Steve Wheeler asks, “So does the concept of Andragogy add any value to our understanding of learning? For me, the answer is no.”

I would have to disagree because the concept of andragogy has actually added great value to our understanding of learning.

Pedagogy is derived from the Greek words paid, meaning “child,” and agogus, meaning “leader of.” In the pedagogical classroom, the teachers are responsible for all decisions about learning: they decide what is to be learned, how it is to be learned, when it should be learned, and whether it has been learned. This leaves the learners pretty much in the role of passive, dependent recipients of the teachers' transmissions. When our public schools were first established, they were based on this pedagogical model.

When adult education was later established, this was the only model available, so our profession was also based on it. That, of course, led to high dropout rates, low motivation, and poor performance. In 1926, Eduard C. Lindeman's book, The Meaning of Adult Education, captured the essence of adult learning:

In this process the teacher finds a new function. He is no longer the oracle who speaks from the platform of authority, but rather the guide, the pointer-out who also participates in learning in proportion to the vitality and relevance of his facts and experiences. In short, my conception of adult education is this: a cooperative venture in nonauthoritarian, informal learning, the chief purpose of which is to discover the meaning of experience; a quest of the mind which digs down to the roots of the preconceptions which formulate our conduct; a technique of learning for adults that makes education coterminous with life and hence elevates living itself to the level of adventurous experiment. - quoted in Nadler, 1984, p.6.4

In the 1950s, European educators started using the term “andragogy,” from the Greek word aner, meaning “adult,” and agogus, “leader of”—the art and science of helping adults learn. They wanted to be able to discuss the growing body of knowledge about adult learners in parallel with pedagogy.

Andragogy is often criticized because, as we now know, it also applies to younger learners; however, the people behind the theories at the time were trainers of adults rather than educators in the school system, so they applied their theories to the part of the population they knew best. Because of their work, they pioneered the way for pedagogy to advance from being almost entirely passive to a more experience-based process of learning.

So yes, Knowles intended andragogy to be different from pedagogy, because pedagogy at the time was extremely passive. That pedagogy is finally catching up to andragogy is not a strong enough reason to drop the concept from our terminology. I believe we should embrace the term because of its rich history and the way it pioneered our present concept of learning.

Reference

Nadler, Leonard (1984). The Handbook of Human Resource Development. New York: John Wiley & Sons.

6.27.2011

Marching Backwards into the Future

A recent post by Bersin & Associates notes, “Approximately three-quarters of employers globally cite a lack of experience, skills or knowledge as the primary reason for the difficulty filling positions. However, only one in five employers is concentrating on training and development to fill the gap. A mere 6% of employers are working more closely with educational institutions to create curriculums that close knowledge gaps.”

Doh! These same employers slashed their workforces for a total loss of 8,700,000 jobs since the recession started in December 2007. Only 4,444,000 jobs have been added back since then, which leaves a net loss of 4,256,000 jobs.

What were they thinking? That they could slash their “most valuable asset” and when the economy picks back up, find the knowledge and skills they require? Yep—short term thinking at its best—and of course it backfired in this complicated/complex work environment.

In a prior post I wrote about the three most important words that managers in an organization must know when it comes to learning: training, development, and education.

Training is learning that is provided in order to improve performance on the present job, which means it is oriented toward the present. What these employers should have been thinking about is the future: what skills and knowledge are we going to need when the economy picks back up? That means they should have implemented development and education processes.

Development is training people to acquire new horizons, technologies, or viewpoints. It enables leaders to guide their organizations toward new expectations by being proactive rather than reactive. It enables workers to create better products, faster services, and more competitive organizations. It is learning for the growth of the individual, but it is not related to a specific present or future job.

Education in organizations differs from education in schools, so don't let the following definition confuse you. Education is training people to do a different job. It is often given to people who have been identified as promotable, who are being considered for a new job (either lateral or upward), or to increase their potential.

The past went that-a-way. When faced with a totally new situation, we tend always to attach ourselves to the objects, to the flavor of the most recent past. We look at the present through a rear-view mirror. We march backwards into the future. - Marshall McLuhan

As we craft our learning processes we must remember the three most important words and ensure that our clients/customers also understand them. Failure to do so will again result in marching backwards into the future.

 

6.07.2011

Five Years Later: A Review of Kirschner, Sweller and Clark's Why Minimal Guidance during Instruction Does Not Work

After having a short discussion with Guy Wallace on his blog, I decided to do a review of Kirschner, Sweller and Clark's Why Minimal Guidance during Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential, and Inquiry-Based Teaching, in which they postulate that students who learn in classrooms with pure-discovery methods and minimal feedback often become lost and frustrated, and that their confusion can lead to misconceptions.

The paper caused a bit of a stir in the learning and training community when it was published five years ago, especially among those who lean towards a more constructivist approach. However, while the authors' critics did raise some good points, the paper is a good reminder that learning and training professionals often carry new ideas and technologies to the extreme. For example:

  • We had the visual movement from about 1900 to 1950, which brought us Dale's Cone of Experience. And of course someone had to add some bogus percentages to it to make it more “official.”
  • When VCRs arrived we made training tapes of everything… even if it did not make sense.
  • eLearning was supposed to kill the classroom.
  • Formal and informal learning were supposed to be at odds with each other, even though each hour of formal learning spills over into four hours of informal learning.
  • All learning is social! Uhh… no. While the majority of learning may be social, we often still learn things on our own.

Thus Kirschner, Sweller and Clark's paper is an important reminder for us not to carry Problem-Based Learning (PBL) to its extreme. That is, while it has its strengths, learners often need a more direct approach in order to build a solid foundation before being presented with PBL.

With that being said, we do need to take a closer look at the paper. For those that are interested, there is a list of papers that discuss the Direct Instruction versus Constructivism Controversy (they are located at the bottom of the page).

The Title and Paper Give Little Respect to the Constructivist Approach

With the title blaring “Why Minimal Guidance during Instruction Does Not Work” rather than “Why Minimal Guidance during Instruction Does Not Work for Novice Learners,” the authors almost seem to ignore that PBL is a necessity for promoting deeper levels of understanding. They do pay some respect to constructivism, for example:

Higher aptitude students who chose highly structured approaches tended to like them but achieve at a lower level than with less structured versions

Certain aspects of the PBL model should be tailored to the developmental level of the learners… there may be a place for direct instruction on a just-in-time basis. In other words, as students are grappling with a problem and confronted with the need for particular kinds of knowledge, a lecture at the right time may be beneficial.

However, they end up admonishing constructivists:

According to Kyle (1980), scientific inquiry is a systematic and investigative performance ability incorporating unrestrained thinking capabilities after a person has acquired a broad, critical knowledge of the particular subject matter through formal teaching processes. It may not be equated with investigative methods of science teaching, self-instructional teaching techniques and/or open-ended teaching techniques. Educators who confuse the two are guilty of the improper use of inquiry as a paradigm on which to base an instructional strategy.

But it seems, at least to me, they may be doing the same, but only at the opposite end of the continuum. For example, they seem to treat their theories as laws, yet…

Cognitive Load Theory Coming Under Withering Attacks

The paper relies heavily on Cognitive Load Theory, yet we have to realize that it is still a theory rather than a law. Will Thalheimer lists several papers on his site that raise concerns about Cognitive Load Theory. For example, even though we know that working memory can only hold about seven chunks (which may actually be only four, give or take one), using the old KISS (Keep It Simple, Stupid) principle can be just as effective, because trying to count the number of chunks can be quite difficult, if not impossible. For example, how many chunks are in René Descartes's statement, “I think, therefore I am”?

Thus, both the authors and the constructivist movement are guilty of jumping on theories before they are fully understood. But why do we do this? Joel Michael writes in Advances in Physiology Education:

…it is important to recognize that educational research is difficult to do; this has been cogently highlighted by Berliner (8) in "Educational research: the hardest science of them all." Berliner points out that unlike a physics experiment, in which it is possible to readily distinguish between the independent and dependent variables, and also possible to isolate and control all of the independent variables, in educational experiments all of this is problematic. Researchers may not agree on which variable is the dependent variable of greatest interest or importance. There may be disagreements about which independent variable(s) are to be manipulated. There may be disagreements about how to measure any of the relevant variables. And, finally, it may be extremely difficult, or even impossible, to isolate and manipulate all the variables suspected of being involved in the phenomena being studied.

Rather than waiting for eons to pass before all the research is available, we (the learning, training, and educational community) often jump into a new theory because we simply do not want to wait until we are dead and buried before we can fix and/or improve our methodology. With that in mind…

Evidence for Constructivism

Joel Michael continues his discussion of promoting active learning with these two studies:

1. Support for discovery learning is provided by a study in which students engaged in a course that incorporated some discovery learning exercises were tested, and their performance on questions related to topics learned through discovery learning was compared with their performance on questions related to topics learned in lecture (Wilke, Straits, 2001). The authors concluded that performance was better on those topics learned through discovery learning.

2. Burrowes compared learning outcomes in two sections of the same course taught by the same teacher. One section was taught in the traditional teacher-centered manner (control group of 100 students), whereas the other section was taught in a manner that was based on constructivist ideas (experimental group of 104 students). The results of this experiment were striking: the mean exam scores of the experimental group were significantly higher than those of the control group, and students in the experimental group did better on questions that specifically tested their ability to “think like a scientist.” Reference: Burrowes PA. Lord's constructivist model put to a test. Am Biol Teacher 65: 491–502, 2003.

While you can find plenty of other research findings on constructivist methods, the idea that you can teach learners to “think like scientists” is fascinating, because problem-solving skills are extremely hard to train. That is, conduct a problem-solving course in an organizational setting and you will more than likely get little or no result. It's almost as if the process must be embodied within the discipline.

Embodied Cognition

On the Brain Science Podcast, Ginger Campbell discusses embodied cognition with Lawrence Shapiro (both the podcast and the transcript can be found at the link). They note that in cognitive science the brain is normally studied in isolation from the world and from the body. Embodied cognition, in contrast, does not treat the brain as something that can be isolated from the body and the environment; it sees the body as in some sense shaping, constraining, or participating in the very processing of the information an organism needs to interact successfully with the world.

In the podcast, Dr. Shapiro talks about some fascinating work on the use of gesture. He notes that boys perform better than girls on certain spatial reasoning tasks. When psychologists studied this, they noticed something interesting: boys rely on gestures a lot more than girls do when solving spatial reasoning tasks. Boys use gestures to work out the problem at the same time they are talking through it, and often the gestures do not synchronize with the verbalizing. It's as if two different systems are working at the same time, a gesture system and a verbalization system. Gestures seem to be part of the process of figuring out these spatial reasoning tasks.

They also discuss the study of kittens that moved around their environment by pulling carriages carrying other kittens. The kittens in the carriages presumably see everything that the kittens pulling them see, but because they do not employ their own motor systems to move around the environment, their perceptual systems do not develop properly. The idea seems to be that part of what is necessary for perception is actual exploration of the environment, not just being a passive recipient like the kittens conveyed in the carriages.

Thus, rather than focusing on conceptually unrelated pieces of information, such as practicing first and learning problem solving second, perhaps we should focus our learning processes on entire ideas and concepts whenever possible.

I would be interested in your thoughts on the subject: leave a comment, write a blog post, or reach me on Twitter (I'm @iOPT).

6.02.2011

Training at its Most Basic is a Positive Impact Caused by Learning

The WSJ recently ran an informative article, Lessons Learned, that discusses the importance of creating a workplace environment that actively encourages people to change. It starts with the statement, “With some studies suggesting that just 10% to 40% of training is ever used on the job, it is clear that a big chunk of the tens of billions of dollars organizations spend annually on staff development is going down the drain.” This is actually a myth: some studies suggest the transfer rate is closer to 60%, and the study the WSJ relied on is based on extremely faulty research (see Myth - 10% of Training Transfers to the Job). However, 60% is still too low a transfer rate, so we know we must use a better method for designing our learning processes.

To ROI or not to ROI?

Looking Backwards in Time

The article notes that an effective post-training follow-up activity is the performance assessment—“When employees know that they are going to be observed and given feedback on their performance, the motivation to use newly learned skills and knowledge increases.” This means you must know what you are going to assess before you design the training, which in turn means the learning process must be based on a goal achieved through backwards planning that will have a positive impact upon the organization.

Does this mean you need to perform an ROI? No. Learning and Training departments normally need to provide an ROI only if they are the initiators of a learning or training process. For example, if you presently outsource your Microsoft Office training program and you want to bring it in-house because you believe you can do it cheaper and better, then you should provide an ROI to show this to be true. If, on the other hand, you are providing the training for the users of a new computer system, then it is up to the original initiators, normally the IT department, to provide the ROI.
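The arithmetic behind that in-house-versus-outsourced example is simple; the sketch below uses made-up numbers purely to show the shape of the calculation (a real ROI would also account for quality differences, one-time transition costs, and so on):

```python
def roi(net_benefit: float, cost: float) -> float:
    """Classic ROI formula: net benefit of the investment divided by its cost."""
    return net_benefit / cost * 100

# Hypothetical figures for bringing the Microsoft Office training in-house
vendor_cost = 60_000    # assumed annual cost of the outsourced program
in_house_cost = 45_000  # assumed annual cost of instructors, materials, rooms

net_benefit = vendor_cost - in_house_cost  # what the switch saves each year
print(f"ROI of bringing it in-house: {roi(net_benefit, in_house_cost):.0f}%")  # ~33%
```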

In other cases... it depends. For example, if a manager comes to you with a request for training that will eliminate a problem in her department and you determine that training is indeed the answer, then you will have to decide whether an ROI is needed. In many cases the manager simply wants the problem to go away. And yes, training may be more expensive than solving the problem, but it frees the manager to help grow the organization rather than spend her time putting out fires. You are really investing in her: by eliminating the problem you allow her to spend more time growing the organization, so the training will pay off in the long run. One rule of thumb is to provide an ROI whenever possible to show that your efforts have a positive financial impact on the organization, but keep in mind that even if the cost savings are not there in the short term, the effort could pay off in the future.

Getting the Impact out of Training with Agile Design

So while you may or may not have an ROI, you still need a positive result or impact. Nadler (1984) defines “training” as learning that is provided in order to improve performance on the present job. Most definitions closely follow Nadler on two points: 1) training always involves learning, and 2) performance is improved. Thus training is basically a positive impact caused by learning. If an activity does not meet these two points, then you are doing something besides training. That doesn't make it wrong or right; it is simply not training.

While the author of the WSJ article brings the learners more into the learning and training process, there is still another step to go: include them in the design. Rittel (1972) noted that the experts with the best knowledge for solving wicked problems are often those affected by the solution; in this case, that is the learners themselves. Yet the only time we normally bring them in is as guinea pigs for testing our learning process. While you might not be able to bring the entire population in at the planning stage, we do need to bring in enough learners to actually represent the population.

This is the heart of Agile Design. The learners are the real stakeholders; even if you or the managers don't agree with what they are saying, you need to listen, guide, and act on their needs and perspectives so that they take ownership of the learning and performance solution. In addition, they gain metalearning and metacognitive skills.

This is true “Learner Design.” Simply designing a learning process for the learners is andragogical or pedagogical design; a true learner-designed process involves the learners. Change works best when the people it affects are involved in the process, and learning is no different if you want to change performance on the job. Involve the learners so that you not only make them part of the change process, but also, as Rittel noted, tap the experts who can provide good advice.

References

Nadler, Leonard (1984). The Handbook of Human Resource Development. New York: John Wiley & Sons.

Rittel, H. (1972). On the planning crisis: systems analysis of the “first and second generation.” Bedriftsokonomen. No. 8, pp.390-396.

5.23.2011

Creating and Evaluating Informal & Social Learning Processes in a Call Center

I recently received this comment on my post, Using Kirkpatrick's Four Levels to Create and Evaluate Informal & Social Learning Processes:

“What do you do when the "learners" are new hires? And the "environment" is a real-time call center?”

Using the same process as in the last post (shown below), we start off with the main goal or objective:

Kirkpatrick's Backwards Planning and Evaluation Model

1. Results or Impact - What is our goal?
2. Performance - What must the performers do to achieve the goal?
3. Learning - What must they learn to be able to perform?
4. Reaction - What needs to be done to engage the learners/performers?

1. What is our Goal?

Training new hires is normally performed because proficient ones cannot be recruited. However, using training as the only performance solution is not a good choice, as it is normally one of the more costly and time-consuming solutions when done correctly. Thus, when formulating your goal, don't think of training as the solution or goal, but rather ask what benefits you are looking for. For example:

Our goal is to convert interested callers into extremely satisfied and delighted customers. We will achieve this by providing timely, accurate and professional service at each and every customer contact and answering their questions and inquiries in a timely and professional manner.

The benefit of our goal is to maintain/increase customer satisfaction, which will lead to higher sales.

2. What must the performers do to achieve the goal?

While there are several tasks the employees should be able to perform, a few of them that would lead to higher sales are:

  • Greet customers in a timely, cheerful and professional manner
    • The benefit is to jump start the customers' experience from the moment they call
  • Quickly and accurately find product information
    • The benefit is to show our customers that we are professionals who will take care of their needs
  • Understand the culture, mission and policies of the company in order to make wise and timely decisions
    • The benefit is to not only provide customers with our goods and services, but to also show them we can aid them with difficult problems

3. What must they learn to be able to perform?

In this example we need a Learning Environment (not just training) that will enable new hires to perform correctly in our call center so that it can perform its mission:

Star Diagram of the Continua of Learning (click the image for a larger version)

Note: for more information on the above diagram, see the post Star Diagram of the Continua of Learning.

A number of experiences and activities are then designed for the learning environment that will enable the Customer Service Representatives to perform the three tasks:

Task One: Greet customers in a cheerful and professional manner

Social Learning: The learners will discuss with each other what makes a great Customer Service Representative.

Role Play: This activity will be performed in the classroom where the learners will take turns playing customers and Customer Service Representatives. When a learner is role playing the customer, he or she will be provided a number of scenarios that range from a happy to dissatisfied customer (some sample role playing activities may be found here).

Task Two: Quickly find product information

eLearning: Explains the company's database and how to find product and service information.

eLearning Branching Scenarios: This course will take the learners through a number of scenarios for finding information that a customer requests.

Informal Learning: The learners are coupled with experienced employees in order to gain real experience.

Social Learning with Social Media: Employees are connected to a micro-blogging service (e.g. Yammer or Twitter) so that they may ask for and pass on information through a social network.

Task Three: Understand the culture, mission, and policies of the company in order to make wise and timely decisions

eLearning Branching Scenarios: This course is an extension of the last eLearning Branching Scenario in that once a learner finds the information that a customer requests, he or she then has to go through various scenarios to help the customer make a decision.

Informal Learning: The learners are coupled with experienced employees in order to gain real experience.

Social Learning with Social Media: Employees are connected to a micro-blogging service so that they may ask for and pass on decision making techniques through a social network.

Social Learning with Social Media: Employees are provided a blogging platform that will enable them to find and pass on decision making techniques that may require more detailed information than the micro-blogging service allows — the micro-blogging service is for quick and short bursts of information while the blog is for more complex and detailed information.

Wiki: For storing and retrieving lessons learned.

Note that the learning platform may start with traditional classroom training, but it is blended with eLearning and informal learning. In addition, it is transformed into a true learning process, rather than an event, in that it is implemented within the employees' daily workflow so that they can continue not only to learn, but to help others learn.

4. What needs to be done to engage the learners/performers?

Before the learners enter the learning environment, each learner's respective manager will ensure that the learner understands the importance of the training they are about to receive. In addition, the learners and managers will set goals and discuss potential problems. After the initial eLearning and classroom learning programs are completed, the manager will follow up with the learners, assign them coaches/mentors, and follow their progress on a weekly basis.

Evaluating the Learning Platform

Since we know precisely what each part of the Learning Platform was designed to perform, our task of evaluating the program becomes much easier:

Kirkpatrick's Backwards Planning and Evaluation Model

1. Results or Impact (What is our goal?)

Did we achieve higher customer satisfaction? This can also be tied to a hard ROI, such as an increase in sales.

2. Performance (What must the performers do to achieve the goal?)

Can the employees now perform as expected?

3. Learning (What must they learn to be able to perform?)

This is assessed in the eLearning programs and in discussions with the experienced employees involved in the informal learning sessions.

4. Reaction (What needs to be done to engage the learners/performers?)

The learners' managers can provide input on the learners' reaction to and engagement with the learning platform.

This backwards planning process can help you pinpoint problems. For example, let's say that you do not get an increase in sales. You then go back one step and see whether the learners are performing as desired. If they are, then either your initial premise was wrong (greater customer satisfaction does not lead to higher sales) or something else is preventing it, such as your product being priced too high.

On the other hand, if they are not performing as desired, then you have to evaluate the working environment to determine if something is preventing the learners from using their skills, such as processes that are counter-productive to great customer service. If you determine that they should be able to perform, then evaluate the learners to see if they can perform or if something in the learning process is preventing them from learning, such as not enough practice time.

If the learning process is sound, then go back one more step and determine if the learners are engaged. That is, do they have the basic skills that will allow them to master the learning program, and/or do they have the motivation and desire to complete it (maybe they see it as a waste of time)?
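The troubleshooting logic in the last three paragraphs is really a small decision tree. Here is one way to sketch it; the boolean checks are placeholders for whatever measurements and observations you actually collect:

```python
def diagnose(sales_up: bool, performing: bool, environment_ok: bool,
             learning_sound: bool, engaged: bool) -> str:
    """Walk the levels in reverse to locate the weak link in the platform."""
    if sales_up:
        return "Goal met: the learning platform is doing its job."
    if performing:
        return ("Premise problem: performance is fine, so either satisfaction "
                "does not drive sales here, or something else (e.g. pricing) is in the way.")
    if not environment_ok:
        return "The work environment is blocking the new skills; fix the processes."
    if not learning_sound:
        return "The learning process is the gap, e.g. not enough practice time."
    if not engaged:
        return "Engagement problem: learners lack the basic skills or the motivation."
    return "None of the usual suspects explain it; re-examine the measurements."

print(diagnose(sales_up=False, performing=False, environment_ok=True,
               learning_sound=True, engaged=False))
```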

2.22.2011

Using Kirkpatrick's Four Levels to Create and Evaluate Informal & Social Learning Processes

In my last post, The Tools of Our Craft, I wrote that the Four Level Evaluation model is best used by flipping it upside down and that it can be used to evaluate informal and social learning. In this post, I want to expand on the second point—evaluating informal and social learning.

Backwards Planning and Evaluation Model

1. Results or Impact - What is our goal?
2. Performance - What must the performers do to achieve the goal?
3. Learning - What must they learn to be able to perform?
4. Reaction - What needs to be done to engage the learners/performers?

Inherent in the idea of evaluation is “value.” That is, when we evaluate something we are trying to make a judgment about its worth. The measurements we obtain give us information on which to base our judgment. This is the real value of Kirkpatrick's Four Level Evaluation model: it allows us to take a number of measurements throughout the life span of a learning process in order to place a value on it, which makes it a process-based solution rather than an event-based one.

Each stakeholder will normally use only a couple of the levels when making their evaluation, except for the Learning Department. For example, top executives are normally interested only in the first one, Results, as it directly affects the bottom line. Some are also interested in the last one, Reaction, because of its engagement aspect: are the employees engaged in their jobs? Managers and supervisors are normally most interested in the top two levels, Results and Performance, and somewhat in the last one, Reaction. The Learning Department, however, needs all four to properly deliver and evaluate the learning process.

Note that this post works through an actual problem whose solution is based on informal and social learning. I wrote about part of it in Strategies for Creating Informal Learning Environments, so you might want to read the first half of that post (you can stop when it comes to the section on OODA).

Results

Implementing a learning process is based on the results or goals you are trying to achieve, and identifying the measurements you need to evaluate those results will help you zero in on the result or goal itself. For example, saying that you want your employees to quickly find information is normally a poor goal to shoot for, as it is hard to measure. Starting with a focused project and then letting demand drive additional initiatives is normally the best way to begin implementing social and informal learning processes.

If you find that you are unable to come up with a good measurement, that normally means you have not zeroed in on a viable goal. In that case, use the Japanese method of asking “Why?” five times, or until you are able to pinpoint the exact goal you are trying to achieve. Establishing new learning/training processes or improving existing ones normally begins with a problem. For example, a manager complains that when he reads the monthly project reports, he finds that employees are often faced with the same problems as others and, in turn, repeat the same learning process, so the same mistakes are repeated throughout the organization.

“Why?”

“No one realizes that others within the organization have had the same problem before and have normally documented their solution (Lesson Learned).”

“Why?”

“There is no central database for them to look in, and the people who work next to them are normally unable to help.”

Zeroing in on the actual cause of the problem helps you build a focused program; in this example, it's a central database for “Lessons Learned” and a means of connecting the people within the organization to see if anyone has faced the same problem before (and vice versa—allowing people to tweet (broadcast) their problems and solutions that may be of help to others).

In addition, you now have a viable measurement—counting the number of problems/mistakes in the project reports each month to see if they improve.

Performance

In a normal training situation, performance on the job is usually easy to evaluate. For example, when I instructed forklift operations in a manufacturing plant, after the training/practice period we would assess the learners on the forklifts in the actual work environment to ensure they could operate safely and correctly under real working conditions.

In addition, I used to train users on Query/400 (a programming language used to extract information from a company's computer system). One of the methods we used to assess the performers was to require them, when they returned to their workplace, to build three queries that were then assessed by someone from the training department to ensure they could perform on the job. Thus the training was transformed from an event into a process by ensuring the skills carried over to the workplace.

However, in our working example, it would be hard to observe the entire “Lessons Learned” process, as it is a three-pronged solution that uses informal and social learning:

  • Capture the Lessons Learned by using an After Action Review (AAR)
  • Store it in a social site (such as a wiki or SharePoint) for easy retrieval
  • Provide a microblogging tool, such as Yammer, to help others to ask about lessons learned that might pertain to their problems and to tweet lessons learned

The first part of the solution could be evaluated, at least in part, by observing some of the AARs and watching the informal learning take place as people discuss their problems and solutions. The second and third parts would be more difficult, as it would be hard to sit at someone's desk all day to see whether they are using the wiki and microblogging tools. While there are probably a number of solutions, one method is to identify approximately how often the social media tools should be used on a daily, weekly, or monthly basis and then determine whether expectations are being met by counting the number of:

  • contributions per month to the wiki (based on their Lessons Learned in the AAR sessions)
  • contributions to the microblogging tool (short briefs on their Lessons Learned)
  • questions asked on the microblogging tool by employees who could not find a Lesson Learned in the wiki that matched their problem

The approximations are based on the number of problems/mistakes found in the project reports and the total number submitted. You might have to adjust your expectations as the process continues, but it does give you a method for measuring the performance. Note that Tim Wiering has a recent blog post on this method in the Green Chameleon blog.
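If the wiki and the microblogging tool expose their activity as logs or exports (most do, though the record layout below is an assumption made for the example), the monthly tallies are easy to automate:

```python
from collections import Counter
from datetime import date

# Hypothetical activity log; in practice this would come from the wiki's and
# the microblogging tool's reporting or export features.
events = [
    {"when": date(2011, 5, 3), "tool": "wiki", "kind": "lesson_learned"},
    {"when": date(2011, 5, 9), "tool": "yammer", "kind": "lesson_learned"},
    {"when": date(2011, 5, 17), "tool": "yammer", "kind": "question"},
    {"when": date(2011, 6, 2), "tool": "wiki", "kind": "lesson_learned"},
]

# Tally contributions and questions per month, per tool
tallies = Counter((e["when"].strftime("%Y-%m"), e["tool"], e["kind"]) for e in events)

for (month, tool, kind), count in sorted(tallies.items()):
    print(f"{month}  {tool:<8}{kind:<16}{count}")
```

The counts can then be compared against the expectations you set and against the number of problems/mistakes showing up in the project reports.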

In addition, once the performers have started using the tools, you can interview them about how the new tools are helping them and then capture some of the real success stories, for example by videotaping them or by turning a question-and-answer interview into a blog post. These stories have a two-fold purpose:

  • The stories themselves are evidence of the success of the performance solution.
  • The stories can then be used to help other learners/performers use the new tools more effectively; stories carry a lot of power in a learning process because learners are able to identify and relate to them.

Learning

First, the purpose of this level is not to evaluate what the performers are learning through the AARs, microblogging, and wiki tools when they return to their jobs (that measurement is captured in the Performance evaluation), but rather what they need to learn so that they can use the tools on the job. Look around almost any organization and you will see processes, programs, and tools that were built on the idea that “if we build it, they will come,” but are now wastelands because the performers saw no use for them and/or had no real idea how to use them. Just because a tool such as Yammer or Twitter may be obvious to you does not mean the intended performers will see a use for it or, for that matter, know how to use it.

In addition, while one organization may not care if someone sends an occasional tweet about the latest Lady Gaga video, another may frown on it, so ensure the intended performers also know what not to use the new tools for.

Since these learning programs can be eLearning, classroom, social, informal, etc., and the majority of Learning Designers know how to build and evaluate them, I'm not going to delve into that in this post.

Reaction (Engagement)

While this may be the last level when flipping Kirkpatrick's Evaluation Model, it is actually the foundation of the other three levels.

One of the mistakes Kirkpatrick made was putting too much emphasis on smiley sheets. As noted in the excellent article Are You Too Nice to Train?, measuring reaction is mostly a waste of time. What we really want to know is how engaged the learners will be at the learning level and whether that engagement will carry through to the performance level. People don't care so much about how happy they are with a learning process; they care about how the new skills and knowledge will be of use to them.

For example, when I was stationed in Germany with the Army, we trained on how to protect ourselves and perform during CBR (Chemical/Biological/Radiological) attacks. One of the learning processes was to don our CBR gear (heavy clothing lined with charcoal to absorb the chemical and biological agents, rubber gloves, rubber boots, the full-face rubber protective mask, and of course our helmets to protect our heads) in the midday heat of summer and then, using a compass and map, move as fast as we could on foot to a given location about two miles away. I can tell you from experience that this is absolutely no fun at all, yet we learned to do it because no one wants to die from a chemical or biological agent—a ghastly way to go. Thus the training had us totally engaged even though the experience was absolutely horrible.

Thus the purpose of this phase is to ensure the learners are on board with the learning and performance process, which is often best accomplished by including a portion of them, along with their managers, in the planning process. You need the managers to help ensure buy-in, as employees most often do what their managers emphasize (unless you have some strong informal leaders among them).

Reversing the Process

By using the four levels to build the learning/performance process (going through levels 1 to 4 in the chart below), it becomes relatively easy to evaluate the program by reversing the process (going through the levels in reverse order: 4, 3, 2, 1):

 

For each evaluation level, what to create and how to measure or evaluate it:

1. Results or Impact - What is our goal?

Create: Implement a process that allows the employees to capture Lessons Learned so that others may also learn from them when similar problems arise.

Measurement/Evaluation: Reduce the number of repeated problems/mistakes in the project reports by 90%.

2. Performance - What must the performers do to achieve the goal?

Create: Identify and capture “Lessons Learned” in an AAR, post them on a wiki, and tweet them using Yammer. When problems in their projects arise, the performers should be able to search the wiki and/or use Yammer to see if there is a previous solution.

Measurement/Evaluation: Count the:
  • contributions per month to the wiki (based on their Lessons Learned in the AAR sessions)
  • contributions to the microblogging tool (short briefs on their Lessons Learned)
  • questions asked on Yammer
In addition, interview performers to capture success stories.

3. Learning - What must they learn to be able to perform?

Create: Perform an AAR. Upload the captured “Lessons Learned” to a wiki. Search and find documents in the wiki that are similar to their problem. Microblog in Yammer.

Measurement/Evaluation: Proficient use of an AAR is measured by using Branching Scenarios in an eLearning program and by performing an actual AAR in a classroom environment. Proficient use of the wiki and Yammer is measured in their respective eLearning programs (multiple choice) and by interacting (social learning) with the instructor and other learners on Yammer.

4. Reaction - What needs to be done to engage the learners/performers?

Create: Bring learners in on the planning/building process to ensure it meets their needs. Managers will meet with the learners on a one-on-one basis before they begin the learning process to ensure the program is relevant to their needs. The instructional staff will meet with the learners during the learning process to ensure it is meeting their needs. The managers, with help from the learning department, will meet with the performers to ensure the new process is not conflicting with their daily working environment.

Measurement/Evaluation: Learner/performer engagement problems/roadblocks that are encountered will be the first item discussed and solved during the weekly project meetings.

Since we started with a focused project, we can now let demand drive additional initiatives that expand upon the present social and informal learning platform.

How do you build and measure learning processes?

2.13.2011

The Tools of Our Craft

The latest edition of Chief Learning Officer magazine contains an interesting article, Time's Up (you can also read the article on the author's blog). It is about Donald Kirkpatrick's Four Level Evaluation Model, which was first published in a series of articles in 1959 in the Journal of the American Society of Training Directors (now known as T+D magazine).

The author, Dan Pontefract, sums up the article in his last statement: “Diverging from the cockroach, it's time for the learning profession to evolve.” While Dan's article is thought-provoking, I believe it misses the mark on two points: 1) the assumption that because the model is old it is no good, and 2) the assumption that it has not evolved.

Old Does Not Mean Outdated

Interaction design is closely related to our craft of training and learning. While the concept of interaction design has been around for ages, it was not formally defined until 1990, by Bill Moggridge, co-founder of the Silicon Valley-based design firm IDEO. And while it is one of the newer design professions, it still relies on older tools. For example, one of the tools used is the affinity diagram, developed by Jiro Kawakita in the early 1960s. Thus, being old does not mean a tool should be extinct. If that were true, the cockroach would have disappeared millions of years ago; yet, because it has evolved, it has managed to survive... much to the disgust of anyone whose home has been invaded by them.

Enso

enso circle by Vibhav
“Nature itself is full of beauty and harmonious relationships that are asymmetrical yet balanced. This is a dynamic beauty that attracts and engages.” - Garr Reynolds

While people who have had their homes invaded by cockroaches look at them in disgust, entomologists see them as one of the marvels of nature. Learning/Instructional Designers should not look upon our tools, such as ADDIE and Kirkpatrick's model, as disgusting objects that have invaded our craft, but rather as entomologists look upon the lowly cockroach: marvels that have survived the test of time.

The Evolution of Our Tools

Just as the ADDIE model has evolved over time, Kirkpatrick's model has also evolved. One of its main evolutionary steps was flipping it into a backwards planning model:

  1. Results or Impact - What is our goal?
  2. Performance - What must the performers do to achieve the goal?
  3. Learning - What must they learn to perform?
  4. Reaction - What needs to be done to engage the learners/performers?

While I blogged about this in 2008 in Flipping Kirkpatrick, Kirkpatrick himself wrote of it several years earlier. This method aligns with how Dan's article says we should start: “start with an end goal to achieve overall return on performance and engagement.” In addition, it in no way treats learning as an event, but rather as a process. What is interesting is how closely Kirkpatrick's evolved model fits with other models, such as Cathy Moore's Action Mapping.

Like the ADDIE model, Kirkpatrick's model is often called a process model. However, this is only true if you blindly follow it. If you remove your blinders and study and play with it, it becomes a way to implement not only formal learning, but informal, social, and nonformal learning as well. For example, step three, What must they learn to perform?, does not imply strictly formal learning methods, but rather any combination of the four learning processes (social, informal, nonformal, and formal).

How do you see our tools evolving?

 

1.02.2011

Star Diagram of the Continua of Learning

In my last post, Should the Door be Closed or Open, Nick Kearney commented that the star diagram was a better representation than the various continua I laid out. I agree; however, since the star diagram is composed of continua, I think when discussing a particular one, as I did in the last post, it helps to just show the one being discussed.

As shown in the diagram below, I made some adjustments to it (you can click the diagram for a larger version):

Star Diagram of the Continua of Learning

As I noted in my last post, I put social learning and reflection on the same continuum (The Door), as the real purpose is that sometimes we need to be alone with our thoughts, while at other times we need to interact with others. And of course there are a lot of alternatives between the two (social reflection being one of them).

I also dropped the Purpose of Learning Continuum (intentional and incidental/serendipitous). While it is an interesting concept, I don't believe that it fits in with the diagram in that it does not help us to design better learning/performance platforms.

David Winter (@davidawinter) tweeted me with the suggestion of adding another continuum: Impact - 'reinforcement/augmentation' of existing understanding/behaviours/identity vs 'transformation.' After thinking about it, I believe it belongs on the Workflow Continuum (EPSS/performance support and training). However, I'm not sure which of the terms is better. I'm thinking that it should be called the 'Workflow Continuum,' with augmentation on one end and transformation on the other. I believe that EPSS/performance support and training would be some of the options that lie between the two:

Workflow

What are your thoughts?

12.30.2010

Should the Door be Closed or Open: Thoughts on the Social Learning and Reflection Continuum

I have been doing some more reflecting on the Social Learning and Reflection Continuum. This reflection includes both social reflection (mostly via Twitter) and self-reflection. Part of it includes how the continuum fits in with other continuums and whether the Social Learning and Reflection Continuum really makes sense. I created the diagram below to see what would happen if I changed it and how other forms of learning would fit in. If you click the diagram it will bring up a larger version in a new window:

Hydra Theory of Learning

In the diagram I replaced the Social Learning and Reflection Continuum with Social Learning and Anchorite. This was mostly because of some tweets with Marcia Conner (@marciamarcia), who thought that Solitude should be on the opposite side of Social Learning. I did not really like that word, so I used the term anchorite, which in part means “rural countryside,” in order to contrast it with social, meaning “village or town” (e.g., it takes a village, or a number of people).

NOTE: While the opposite ends of the various continuums discussed below may be a choice of one or the other, they are more often shades of various degrees and/or combinations. That is, don't just look at the ends, but also picture a wide range of possibilities between them.

The Door

Door: Open or Shut?

I had the thought, “What is the real purpose of the continuum? Is it to count the number of people involved? Or does it have a deeper meaning?” Then I remembered something I had read in which the physicist Freeman Dyson commented on the subject. He noted that when writing, he closes the door, but when doing science, he leaves it open. This is because writing requires deep reflective thought, while in science you welcome being interrupted, because it is only by interacting with other people that you get anything interesting done.

Thus it is not really about the number of people involved in the learning episode, but rather whether you welcome the thoughts of others or need to sort out your own thoughts and ideas without being interrupted. Counting the number of people in a learning episode does not make sense; it is sort of like counting the number of seats in formal learning — who cares? The real question is whether you need to discuss ideas with others or need to sort ideas, order them, toss out invalid ones, etc., within your own mind.

So I named this continuum “the door” – do you need it open or shut at a specific point in the learning? And of course you may decide to choose a combination and have Social Reflection — engaging with others in a way that encourages talking with, questioning, or confronting, to aid the reflective process by placing the learner in a safe environment in which self-revelation can take place.

Thus, I am now sticking with my initial premise that reflection belongs on the same continuum with social learning.

Direction of Control

Direction of Control

David Winter (@davidawinter) thought perhaps that Autonomous should be placed opposite Social Learning. At first I thought his term was better than Anchorite or Solitude. But then I thought some more and decided that Guided Learning was really its opposite, so in the first diagram above I have them on their own continuum. But when I started to name them, it dawned on me that they have the same purpose as the Formal and Informal Continuum — who controls the learning? Since Autonomous and Guided Learning have slightly more precise meanings than Formal and Informal, I placed them on the inside of the Direction of Control Continuum.

Known or Unknown?

Known or unknown answer

Collaborative Learning is quite similar to cooperative learning in that the learners work together in teams to increase their chances of deeper learning. However, it departs from cooperative learning in that there is not necessarily a known answer. For example, trying to determine the answer to "how effective is reflection?" would be collaborative learning, as there is a wide range of possible answers, depending upon the learners' experiences and perspectives.

Purpose of Learning

Purpose of Learning

I included a Purpose of Learning Continuum as learning normally has a purpose during an informal or formal episode, but often we learn something that was not in the initial plan. However, that learning may prove later to have a real and important purpose.

Processing

Type of Process

The processing continuum is important because it determines how we will learn something. If it is easy to learn, we may only have to listen, observe, feel, etc. But as it becomes more complicated, we need to actually do it. Of course that is not always possible, so between the two ends of the continuum are the various activities that we may practice in order to be able to perform in a real work setting.

Workflow

Workflow

While writing this post, I thought of another learning continuum, workflow — can the learning be embedded within the learner's workflow or does it call for a training process?

Thoughts?

What are your thoughts on these various learning continuums? Are there more? Do these make sense? Please let me know by leaving a comment, tweeting me (I'm @iOPT), or carrying the discussion further on your own blog (send me a tweet so I can RT it).