By Cathy Moore
Here's a flowchart that will help you identify the best solution to a performance problem, whether it's a job aid, a workflow improvement, training, or something else. It's based on action mapping, my streamlined approach to instructional design.
First, download the flowchart. It's available as a PDF in pretty and plain versions.
Pretty A4
Pretty US letter
Plain version (the one shown in the video)
You can also get the PDF flowchart with two other job aids, all designed to work together.
Or use the interactive, web-friendly version. You can translate it with a translation plugin.
Then consider watching the following 8-minute video, which walks you through a short discussion with a client, showing you how some quick questions can save you days of unnecessary training development.
Video: “Do you really need to design training? Ask this flowchart.” (from Cathy Moore on Vimeo)
So far, thanks to our questions, the client has identified ways to:
These are permanent workflow improvements that avoid the need for training. At this point, the only training we're going to develop is a very compact activity on identifying last names. It could probably be posted on the intranet with a link sent to everyone through email.
If we hadn't used the flowchart and had simply obeyed our client's request for training, we'd spend a lot more time developing something a lot less useful. We'd probably create an online course that starts with "Welcome to the course on completing TPS records." We'd list objectives like, "At the completion of this course, you will be able to enter the correct XR code..." We'd probably "motivate" the learners by talking about the importance of completing the record properly and describing the costs of having our records rejected.
Then we'd tell people what they already know -- that they have to log in to the annoying server to see the XR codes. We'd probably walk them through it "to make sure everyone knows how" and lecture them on the importance of using the updated sheet.
To "teach" the rules for flagging records, we'd probably display a chart of rules, give some examples, and then quiz the learners on whether they can remember the information that they saw five seconds ago and which they will forget by tomorrow if not later today. Finally, we'd include a little activity to help them practice identifying last names.
Within a month, we'd discover that people are still printing out the XR code sheet and failing to flag records properly.
Instead, just by asking some questions, we've helped the client identify permanent improvements, and we've freed up enough time to do a good job on the little name activity. The time we don't spend creating unnecessary training becomes time we can invest in designing much higher-quality activities.
By Cathy Moore
Do your clients expect you to create training on demand? By changing how you talk to them, you can steer them away from an information dump and help them solve the real problem.
Once they agree to do a needs analysis, you're on your way to the best solution.
Here's a quick slideshow with ideas for things you could say -- and not say.
For more, see:
Stop being an order taker and help your clients solve the real problem. The Partner from the Start toolkit helps you change how you talk to stakeholders, find the real causes of the problem, and determine what type of training (if any!) will help.
By Cathy Moore
"Training will help solve this problem."
Before you say this, make sure more powerful changes have been made first.
Here are some factors to consider. Start at the top and confirm that each statement is correct before you decide, at step 7, whether training will help.
It's like the marketing funnel. We pour a bunch of problems in at the top. Some problems are solved at each level, until a few problems make it to the narrowest bit, where training might be appropriate.
The funnel works at a high level: It will help you determine if the entire idea of training is worth considering.
If it is, then you can set your goal for the project, list what people need to do on the job to reach the goal, and analyze those tasks, as described in the (recently expanded!) Partner from the Start toolkit and in my book.
The "Will training help?" flowchart comes in later, during the analysis. It helps you examine individual actions, one at a time. For example, it will help you see if a checklist could be something to try to help people wrangle left-handed widgets. You run through the flowchart multiple times, once for each high-priority action.
I've heard from some designers that they use the flowchart as an initial vetting tool. They run through it once for the entire project or have the client do it on their own. The flowchart isn't designed for that. It's too easy for a training-obsessed person to bend it to their will.
If you want to quickly vet the proposal before you get into analysis, the funnel questions would be more appropriate.
By Cathy Moore
Good fiction writers "show, don't tell" so their scenes seem real. What does that technique actually look like, and how can we apply it to scenario-based training?
Here are some tips from the free scenario writing style toolkit.
Don't describe what people say. Have them actually say it.
In the examples below, which can you more easily picture? Why?
| Telling | Showing |
| --- | --- |
| Steve says he's concerned about the delay in processing TPS reports. | “It takes too long to process TPS reports,” Steve says. |
| Javier is worried about an employee who might be using drugs. | "I'm worried about one of my team members," Javier says. "I think he might be using drugs." |
| Clara has been talking with the IT director at Acme and is excited to report that he's asked for a quote. | "Great news!" Clara says. "The IT director at Acme has asked for a quote." |
Once you put words in quotation marks, you naturally start "showing" rather than telling in other ways, too -- like the following.
Imagine you're watching a film. How do you know what a character is thinking or feeling? By what they say and do. No narrator explains it to you.
Try to create the same experience for your scenario players. Don't describe what people are thinking or feeling. Instead, have them show it.
| Telling | Showing |
| --- | --- |
| Paolo doesn't want to talk about what happened at his previous job. | "What happened at your previous job, Paolo?" Sara says. "Who wants another coffee?" Paolo says. "I'm going for a refill." |
| Mrs. Turanli doesn't want to join the other residents at breakfast and seems depressed. | "Are you coming down to breakfast?" you ask Mrs. Turanli. "No, thank you," she says. "Why not? You usually love breakfast." "I don't know," she says. She's staring at the television, which is off. "I don't feel like it." |
"Showing" often requires more text, and that worries some stakeholders. But it's more interesting text, because it lets the reader draw conclusions.
We're also simulating the real world, where no narrator tells you what to think. Realistic practice is more likely to be transferred to the job.
If you're putting the player in the scenario as "you," don't tell them what to think or feel. It not only feels fake, it also pushes the player toward a conclusion they should reach on their own.
Brenda has missed three out of the last four meetings. You wonder if she's losing interest in the project.
In two days, you have to give the Wonder Widget presentation to 600 government officials. You're nervous because you haven't looked at the presentation in two years, and now you can't find it.
You've left three voicemails for Simon, but he still hasn't called back. You suspect he's avoiding you.
The free mini-toolkit on writing style has you practice rewriting "telling" into "showing." It also gives tips and hands-on practice with:
By Cathy Moore
“Why” is a useful question when you’re getting to the root of a performance problem. But it has a more powerful cousin: “What for?” Here’s an example.
Your client, Joe, wants ethics training.
“People are lying when they file their performance reports,” he says. “They say they did more work than they actually completed. So I want you to create a one-hour course on ethics.”
You could obey and make grownups sit through a course that tells them it’s wrong to lie, or you could examine the problem more closely. Let’s examine the problem, and start with “why.”
“Why are people lying on their performance reports?” you ask.
“Because they don’t realize how important it is to be honest,” Joe says. “That’s why they need a course on ethics.”
Joe has answered your question in good faith, but you haven’t made any progress. Joe hasn’t seen any reason to doubt his assumptions.
Let’s rewind and try again, this time with “what for?”
“What do they lie for?” you ask.
“They want to look good even when they can’t do the work,” Joe says. “We have a competitive culture.”
Now we’re getting somewhere. Our next question can be “why” because we’ve steered Joe down a better path.
“Why can’t they do the work?” we ask.
“They have too much on their plates,” Joe says. “And if they ask for help, their managers tell them to stop whining.”
Obviously, a one-hour course on ethics will do nothing for Joe. And now he’s willing to look more deeply at the problem to find a better solution.
This is a variation on an example from the Partner from the Start toolkit, which helps you stop being an order-taker and steer clients toward the most effective solutions.
“Our patients complain that nurses are abrupt,” our client says. “The nurses just do their tasks quickly and leave. So we need a course on bedside manner.”
Why?
Q. Why are the nurses abrupt?
A. Because they don’t understand the importance of bedside manner. That’s why we need a course.
What for?
Q. What are the nurses abrupt for?
A. They don’t want to get drawn into a conversation with the patient, because they have to hurry to the next one. We’re understaffed.
There’s a solution to this problem, and it isn’t a course on bedside manner.
We analyze performance problems to see if training really is part of the solution. If you’re asking “why” and hearing about alleged ignorance (“they don’t understand,” “they don’t think it’s important”), clients will continue to assume that training is the solution. To have them look deeper, try “what for?”
Discuss this post on LinkedIn.
By Cathy Moore
Need to write a scenario question? Get ideas from these three classic blog posts that you might have missed.
What's the difference between a quiz question, a mini-scenario with poor feedback, and a strong scenario question? Compare these versions of the same data-security question and discover an unusual use for a Chipmunks CD.
I've just made a decision in your scenario. Should you now tell me what I did right or wrong, or just show me the consequence of my decision?
I vote for "show" because I want to use my brain. Here's an example of how it works, plus an early encounter with the Omniscient One, that faceless know-it-all who dominates elearning.
These tips will make your scenario characters believable, relatable, and concise. (The most common mistake I see is too much small talk. See tip #2 for the solution.)
My four-week online scenario design course gives you hands-on practice writing scenarios for training, plus my personal feedback on your work. Learn more.
By Cathy Moore
I’ve added several branching scenarios to this collection of scenario-based training examples, along with questions to help you evaluate the designers’ choices.
The new examples include five activities you might not have seen before:
You’ll consider several design decisions, including:
By Cathy Moore
"Our job is to give the client what they want." Sound familiar?
It's what I was told when I started. But decades later, I'd say this instead:
"Our job is to make the client look good."
Often this means, "Our job is to save our clients from themselves."
Which manager looks good? The one who helps staff do their work well and feel proud of it, or the one who makes everyone sit through a zombie presentation followed by a quiz that a garden gnome could pass?
If we want to make our clients look good, we can't just give them what they think they want.
"Give me a heart transplant, and use this knife to do it." No surgeon would agree to this. It goes against their ethics.
"Make me a course, and use this tool to do it." Like a surgeon, we should have ethics. We should at least vow to do no harm.
Creating a course and making everyone sit through it does harm when it doesn't solve the problem. We not only waste money and time; we also disrespect the client, the employees, the organization, and our own profession.
Our goal should be to leave our clients in better shape than when they came to us. The only way to do this is to diagnose their problem and help them solve it. They'll look good, and we'll be heroes.
To know what will make our client look good, we need to know who they are and what challenges they're facing. For example, we could:
For example, in the Partner from the Start toolkit (now available), we have a practice client called Carla. She wants an online course for managers. The course is supposed to teach them how to use the (fictional, but only barely) Soto-Baldwin personality inventory to "become more empathetic."
You could obey and crank out the course. But if you spend a few minutes learning more about Carla, you discover that going ahead with this idea would damage an important relationship. You'd also waste everyone's time with a dubious personality test. And, of course, is "be more empathetic" really the solution? What's the actual problem?
Your challenge is to help Carla see all of this for herself.
Once we understand where the client is coming from, we can help them analyze the problem.
Even if you can only invest two hours, you'll have a chance to steer your client in the right direction.
These first steps with the client are the focus of the Partner from the Start toolkit. It gives you the direction I wish I had when I first started in this field.
You'll change how you talk to stakeholders so you can help them solve problems and improve lives. You’ll stop being an order taker and move toward performance consulting.
You'll have tons of realistic practice and a unique system of real-world tasks. The 30 tasks have you improve the forms you use, write what you'll say in meetings, practice with colleagues, and establish procedures to permanently change how you work.
You can also use the toolkit as a mega-job aid as you start a new project. In each section, you can write notes in the toolkit, recording how you'll apply the techniques to your current client. You can download these notes as a custom PDF. Later, you can reuse the notes fields and download new PDFs as many times as you want for new projects.
Check out the toolkit here!
Discuss this post on LinkedIn.
Photo credit: New Jersey National Guard Flickr via Compfight cc
By Cathy Moore
If your organization is typical, you have a training request form. Look at it now. It probably commits 10,000 sins.
For example, it might ask the client to:
With this form, you’re saying, “My job is to produce whatever training you want, whether or not it will actually work.” It turns you into a worker in a course factory.
If you want to have a real impact and win the respect of your organization, you need to set your clients’ expectations from the start.
If you must have a form, call it something like “development request.” Make clear your job is to improve performance, not create training on demand.
Throughout the form, avoid terms that refer to a specific solution. There is no solution yet. You won’t decide whether training is part of the solution until you’ve analyzed the problem.
For example, don’t use these terms in the form:
Ask about the issue that the client is seeing. You might use questions like these:
Your goal is to get an idea of the possible business goal and how the client currently views the problem. Both could change during your discussions with the client.
A development request form that you can adapt is now available as part of an online toolkit.
The Partner from the Start toolkit is a menu-driven series of challenges, guidance, downloads, and real-world tasks that will help you start projects right and avoid creating information dumps.
Discuss this post on LinkedIn.
By Cathy Moore
“Never design anything without first writing the learning objectives.”
We all know this. It’s a useful rule, but only when the objectives are useful.
And there’s the problem — conventional learning objectives can work against us. They’re our friends, but not always.
What do I mean by “conventional learning objectives”? This sort of thing:
Here are three questions that will help you set boundaries with our frenemy.
Conventional learning objectives might be your friends if both of the following are true.
Is there really a knowledge or skill gap? Maybe the problem is mostly caused by bad tools, an inefficient process, lack of incentives, or some other environmental issue. With your client and SME, first identify what people need to do on the job, and then walk through this flowchart before agreeing to develop training.
Will closing the gap solve the problem? Maybe it’s true that people don’t know the intricacies of the supply chain, but installing that information in their brains won’t make them better widget polishers. Don’t deliver content just because someone told you to.
If our analysis shows that we really do need to design a learning experience, then, yes, we need objectives. Are the actions we wrote earlier good enough, or should we let learning objectives elbow their way into our project?
Here’s an example from my book.
Let’s say that we want firefighters to educate the public about preventing forest fires and quickly put these fires out when they occur. Our official goal is, “Fire-related losses in our region will decrease 10% by next year as firefighters prevent and extinguish brush and forest fires.”
Which of the following do you think I’d accept as actions to reach this goal?
a) Identify the techniques used to extinguish a brush fire
b) List the principal sources of combustion in a deciduous forest
c) Describe common public misconceptions about campfires
d) Quickly extinguish a brush fire in a dry mixed forest
e) Define “incendiary device”
If you said that only d, “Quickly extinguish a brush fire,” was an action by my standards, you’ve gotten my point.
An action is something you see someone do as a normal part of their job. It doesn’t take place in their head or in that abstract world I call Testland. The action should be the focus of our analysis and design, and it should be the statement we use to “sell” the material to the stakeholders and learners.
In the world of conventional instructional design, the other statements are also observable objectives.
For example, we can watch a firefighter write a list of the techniques used to extinguish a brush fire, and we can point at that list and say, “See? They know it.” And that’s the problem — we’re just measuring whether they know it. There’s no guarantee that the firefighter will actually apply this knowledge, which is what we really want and what we should be helping them do.
“Identify the techniques” is an enabling objective. It describes information necessary to perform the action. It goes in the information part of the map — I’d list “techniques to extinguish a brush fire” as required knowledge that’s subordinate to the action about putting out fires.
Our goal is to create realistic, contextual practice activities. We can do that only if we focus on what people need to do. If instead we let knowledge-based objectives distract us, we’ll create the usual information dump followed by a quiz, which is the approach that helps make us irrelevant.
If you’re using action mapping, your client helped create the list of actions, so they’re already familiar with them. If you need to submit a formal document, I recommend an outline rather than a big design-everything-at-once document. (See this big interactive graphic of the action mapping workflow.)
In that outline, you can include your action map, which shows the actions and the information required by each. The actions are your main objectives, and the bits of information represent the knowledge that supports those objectives.
If your client wants to see conventional learning objectives, consider listing your actions as “performance objectives.” Then, indented and subordinate to each performance objective, list its enabling objectives.
I resist writing the enabling objectives using test language (“describe, explain, define…”) because that sets the expectation that there will be a knowledge test. Maybe some of the knowledge doesn’t need to be memorized and could instead be included in a job reference. It won’t be tested, so there’s no reason to write a test-style objective about it.
Or maybe people do need to memorize some stuff, but a separate knowledge test would be artificial. Instead, you could assess with the same type of activities you provided for practice, which would test not only whether people know the information but whether they can apply it.
Briefly tell people what they’ll be able to do as a result of the activity, and focus on what they care about. Put those over-eager learning objectives on mute because they don’t know how to sound appealing.
Again, I’m not talking just about courses. This applies to activities, which could (and maybe should) be provided individually, on demand. Each activity that stands alone should quickly make clear what people will practice doing and how they’ll benefit.
For more on the distinction between an action and an enabling objective, see Why you want to focus on actions, not learning objectives.
By Cathy Moore
“We’re introducing something new,” your client says. “So of course everyone needs to be trained on it.”
Hmmm. Really?
Maybe your client is thinking this: “This new thing is so bizarrely new that no adult Earthling could possibly figure it out without formal training.”
Or maybe they’re really thinking this: “This new thing is a pain in my neck and I don’t know how to introduce it. I’ll have L&D train everyone and call it a day.”
Either way, the client is expecting you to unleash an avalanche of “training” on innocent people who would rather just do their jobs.
“Please train everyone on the new TPS software by June 1,” your client says.
The client expects to hear, “Sure. I’m on it!” Instead, offer an innocent “why?”
“Why are you installing new TPS software?” you ask.
“Because people were messing up their reports in the old software,” your client says.
“Why were people messing up their reports in the old software?”
“It was confusing to use,” your client says. “The new software walks people through the process a lot more clearly.”
“So the new software is easier to use?”
“Yeah, a lot easier.”
“And everyone who will be using it is already familiar with the old software?”
“Yep. They’ve all been entering TPS reports for years.”
At this point, do you agree with the client that everyone needs “training” on the new software? I hope not.
You might propose this: Give the new software to a few typical TPS report creators and watch them figure it out. Their struggles (or lack of struggle) will show what support they really need. A help screen or short reference is likely to be enough “training” in this case.
If you’re using action mapping, you’ll want your client to give you a measurable business goal that justifies the expense of the project.
In our example, the client’s first goal was, “TPS software training is 100% complete by June 1.” This goal is measurable, but it doesn’t show how the organization will benefit. It also gets way ahead of itself by assuming that training is the solution.
Your innocent questions help the client see their real goal. This might be, “TPS error rates decrease 35% by June 1 as all TPS staff correctly use the new software.”
This goal doesn’t assume that training is the answer, and it justifies the expense of the project in terms the organization cares about. It also leaves room for many solutions, including job aids.
“We’re releasing a new product,” your client says. “Please train all employees on it.”
What are the two biggest problems with this request? I’d say:
1. The client assumes training is necessary.
2. They think “everyone” needs training. They’re planning a sheep dip.
Your (polite! helpful!) questions should steer the client to this:
Then, if some training does seem to be necessary, it will be far more targeted and useful.
You could use a similar approach for customer training for a new product:
What to do if they just want “awareness”
How to design software training, part 1: Do everything except “train”
Is training really the answer? Ask the flowchart.
By Cathy Moore
“How can we make mandatory training more than a tick box exercise?”
That’s the top topic voted by blog readers, so here’s my take.
For “mandatory training,” I’m picturing any material that says some version of “Follow these rules.”
It’s sheep-dip training. Everyone must be “exposed” to it, and a checkmark records that they have been exposed.
How can we make it more relevant?
A client who says “Everyone must be trained on X” needs our resistance, not our obedience.
Help the client by asking questions, such as:
If there’s really no problem, we shouldn’t create a solution. We need to focus on improving performance, not guarding against problems that experience has shown aren’t likely to occur.
If it’s clear there really is a need for “training,” or some force far outside your control insists on “training,” then put on your action mapping hat and push for a measurable goal. Here’s one model to follow.
For details, see How to create a training goal in 2 quick steps.
Make sure your audience is specific. “All employees” is not specific.
If you’re required by forces beyond your control to create something for all employees, you can at least break down the audience by major job roles as described next.
Focus on one job role in your audience. Ask your client and SME what these people need to do, in specific, observable terms, to meet the goal.
“Follow the data security policy” isn’t specific. This is specific:
Prioritize the actions. Choose a high-priority one, and ask, “What makes this one thing hard to do?” Use the flowchart.
Again, you’re doing this for a specific group of people in a specific job, and you’re focusing on specific, observable behaviors. You’re not asking this once for the entire “course,” and you’re not talking about all employees in every job everywhere.
If those forces far beyond your control insist on applying the same solution to everyone, do this analysis for the major job roles. You probably won’t have a ton of time to do this, but even two hours can save you and everyone else from a much bigger waste of time in the form of irrelevant and ignored materials.
Then, if training is part of the solution, you can have people use only the activities that apply to their job.
If you skip this analysis, what do you have to work with? Generic rules that are guaranteed to become an information dump.
Instead, if you look closely at what people need to do and why they aren’t doing it, you get:
Yes, people need to know stuff. But they need to know stuff in order to do stuff. Design first for what they need to do.
Provide the need-to-know information in the format it’s used on the job. Let people pull the information just like they will on the job.
Here’s a fictional example. Extraterrestrials have landed and are being incorporated into earthling families. As a result, employers have created alien leave policies. Here’s a mini-scenario for managers.
To answer this question, what information does the manager need? The alien leave policy. How should we provide it?
The traditional approach would be to first present a bunch of slides about the policy. Then we’d give people a chance to “apply” what they’ve “learned” by having them use their short-term memory to answer the question.
But why design slides to present information that’s already in a policy on the intranet?
Instead, we can plunge people into the activity and let them use the policy just like they will on the job.
And now that we aren’t developing lots of information slides, we can create more activities. Since they aren’t trapped inside an information presentation, they can travel alone. For example, we can provide them individually over time (spaced practice) as described in this post.
Create a prototype of one typical activity and show it to the stakeholders. Make clear that people will see only the activities that apply to their job. They’ll pull information rather than recognizing what they saw three slides ago, and they’ll learn from the consequences of their choices.
You’re letting the stakeholders see for themselves how you plan to provide the “training,” because then you’ll be in a good position to respond to the following common concerns.
Give each option unique feedback. In that feedback, first show the consequence of the choice — continue the story.
Then show the snippet of information they should have looked at, as described in How to really involve learners. Do this for all consequences, not just the poor ones.
See more ideas and examples in Scenario mistakes to avoid: Eager-beaver feedback.
If you have a stakeholder who’s determined to expose everyone, you can point out that they are now exposed. They’re just exposed after making a relevant decision, rather than in a forgettable presentation.
By not presenting information first, you’re helping people see their own knowledge gaps. They’re not pulling stuff out of short-term memory, because you haven’t put anything there. They have to rummage around in their existing knowledge, look at the policy just like they would in real life, make a choice, and learn from the consequences. They get deeper learning, plus they’re dutifully “exposed” to the correct information.
Which approach is more likely to avoid lawsuits about misuse of the alien leave policy?
A. Present the policy over several slides. Then require a knowledge test to see if people can recognize a bit of information that they saw 5 minutes ago. If they can, they “pass.” If they can’t, they must put those same slides back in their short-term memory and try again.
B. Present challenges in which people need to make the same decisions they make on the job. Provide the information in the same format that people will have it on the job. Start with easy-ish decisions and increase the challenge. If people make good decisions in enough activities, they’re free to go. If they make not-good decisions, they get more activities and optional help until they make good decisions.
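The adaptive flow in option B can be sketched in a few lines. This is a minimal illustration in Python, not a description of any particular authoring tool; the function name, the `mastery` threshold, and the return strings are all illustrative assumptions:

```python
# Sketch of option B's logic: keep serving decision activities until the
# learner makes enough consecutive good decisions, resetting after a poor one.
# All names and the mastery threshold are illustrative, not from the post.

def run_activities(activities, decide, mastery=3):
    """Serve activities until `mastery` consecutive good decisions occur."""
    streak = 0
    for activity in activities:
        if decide(activity):   # learner made a good decision
            streak += 1
            if streak >= mastery:
                return "free to go"
        else:
            streak = 0         # poor decision: more activities, optional help
    return "needs more practice"
```

The key design choice this makes visible: assessment happens through the same decisions people make on the job, and the exit condition is demonstrated performance rather than a one-shot knowledge test.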
Don’t design for “They should know the rules.” Design for “They should correctly apply the rules on the job.”
For lots more, see my book and just about everything in this blog, especially the following posts.
Credits
Photo of Jorge: David Boyle in DC via Compfight cc
All other images: Cathy Moore
By Cathy Moore
I talk a lot about using Twine for branching scenarios, but it’s also useful for creating interactive job aids. Here are two examples.
Want to help people diagnose a problem or identify the best person to contact? Be inspired by this fun example created by Krishan Coupland in Twine: A Primer on the Capture and Identification of the Little Folk of Myth and Legend.
This is basically a text-based flowchart, sending you down paths depending on the characteristics of the creature you’re trying to identify.
This type of interaction has a lot of potential uses in business, where the little folk might be replaced by types of tools, people to contact, troubleshooting steps to follow, or any other type of flowchart-y decision.
I’d prefer a quicker, visual flowchart when possible, but this text-based approach lets you include more detail.
You can make Twine handle more complex decisions if you use variables.
One of the most common questions I get is, “Will action mapping work for my project?”
To relieve the pressure on my inbox, I created a Twine interaction that answers that question. I used variables to keep track of answers.
For example, you might say that your client is a non-profit organization ($org is “nonprofit”) and their goal is to make people feel confident or engaged ($goal is “feelings”). As a result, you’ll see advice tailored for non-profit organizations and feelings goals.
You might want to try the interaction before reading more, or the rest of the post won’t make much sense.
The variables are set when you answer questions at each decision point. Here’s how one decision point looks.
Earlier, the user identified whether their client was external, internal, or themselves. Now they’re being asked what type of organization the client works for.
If they answer “Business,” a variable that tracks the action mapping score ($am) gets 2 more points. This score will help decide whether action mapping is appropriate.
If they answer “Non-profit or government,” the action mapping score increases by only 1, due to the goal-setting issues that often plague those types of organizations.
If they answer “University or school,” the action mapping score doesn’t increase, because it’s likely that action mapping won’t be appropriate. That will get decided in the next question, which asks who the audience is.
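The logic above could be sketched in Twine's Harlowe format with setter links. The passage name "Audience" is my invention, and this assumes $am was initialized earlier (for example, (set: $am to 0) in the opening passage):

```
What type of organization does your client work for?

(link: "Business")[(set: $am to it + 2)(go-to: "Audience")]
(link: "Non-profit or government")[(set: $am to it + 1)(go-to: "Audience")]
(link: "University or school")[(go-to: "Audience")]
```

Each (link:) runs its (set:) only when that answer is clicked, then (go-to:) sends the user on to the next question, so the score quietly accumulates as they move through the interaction.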
The questions continue setting variables and changing the action mapping score. Users who want to prepare people for a knowledge test will be told early on that action mapping won't be appropriate. Others will continue until they see the final advice screen.
The final screen uses the variables that have been accumulating to display text unique to each variable. Here’s a snippet.
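In Harlowe, that kind of conditional advice could look something like the following sketch. The variable values "do" and "aware" are my guesses, not necessarily what the real interaction uses:

```
(if: $goal is "do")[Good news: you want people to *do* something
differently, which is exactly the kind of goal action mapping is
designed for.]
(else-if: $goal is "aware")[Your goal is awareness. Before mapping,
try restating it as something observable: what should people *do*
once they're aware?]
```

Stacking several (if:)/(else-if:) blocks like this, one per variable, is what lets a single advice passage display different text for each combination of earlier answers.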
In the above excerpt, if you said that your goal was to have people do something differently, you get a confirmation that action mapping will help. However, if you said that your goal was for people to be aware of something, you get some advice on how to change that so you can use the model more successfully.
Several additional paragraphs of text appear on the advice screen, all based on the answers you gave earlier and the variables you were assigned as a result.
This took some time to develop, but it has also saved a lot of time by reducing the number of questions I get. This kind of tool can reduce the need for training and relieve the pressure on help desks by providing instant answers tailored to each situation.
By Cathy Moore
Looking for inspiration for your scenario-based training? Here are some ideas from the world of fiction.
Branching scenarios often represent decisions that take place in a complex world.
For example, let's say your scenario describes a manager, Sarah, who has to decide what to do about a long-term employee whose performance is suddenly slipping. In the real world, Sarah would have a long history with the employee that would influence her decision. That's the backstory.
It can be hard to cram a lot of backstory into an online scenario. One way to do it is with links that provide snippets of history, as I describe in chapter 10 of my book. It's common to do that in Twine scenarios.
Arcane Intern (Unpaid) by Astrid Dalmady provides two levels of this additional information.
For example, you can click to look inside "your" bag. Once you're in the bag, you can click more links for another layer of information.
From those additional links, you can navigate one step back to the bag or all the way back to the waiting room.
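In Twee notation (Twine's plain-text format), that layered structure is just nested passages linking to each other. The passage names and text below are invented for illustration, not quoted from the game:

```
:: Waiting Room
You sit in the waiting room, bag on your lap.

[[Look inside your bag->Bag]]

:: Bag
Keys, a dying phone, and the flyer that brought you here.

[[Read the flyer->Flyer]]
[[Close the bag->Waiting Room]]

:: Flyer
The fine print is too small to read.

[[Put it back->Bag]]
[[Back to the waiting room->Waiting Room]]
```

Each layer links one step back up, so the player can dig as deep as they like and still return to the main scene.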
The story also appears to require you to click the backstory links. In training-land, a stakeholder could argue that this sort of control is necessary to make sure players have the information they need later to make a good decision.
I'm not convinced, especially since I regularly rant that we should let learners decide how much information they need. When players choose how much information to gather, they're practicing a real job skill: recognizing what they need to know to make a good decision, and going to get it.
In most Twine games, when you click a link for more information, you go to another screen. But in Harmonia by Liza Daly, the extra information appears in the margins.
This approach could help reduce cognitive load -- the player doesn't have to juggle the information in short-term memory while also making decisions.
This prototype of a comic shows one story that you can change by clicking a different decision in an early panel.
Choose a different fruit in the second panel, and the rest of the comic changes to show the consequence of your choice.
You could create a text version of this. For example, you could display a short story of an event at work that has one clickable decision early on. The default text that displays shows the result of one branch of the decision. The player can read that, and then click the decision to see how the story changes.
This could be a way to satisfy a stakeholder who says, "Make sure they see the consequence of Common Mistake X!" I'd prefer to let players make decisions from the beginning like the grownups they are, but this is a compromise you might need someday.
By Cathy Moore
You want people to practice making decisions in a situation that has grey areas -- that's perfect territory for an elearning scenario. But what type of scenario do you need?
Will a one-scene mini-scenario be enough, or do you need to invest the (considerable!) time in creating a branching scenario?
Here are some ways to figure that out.
A mini-scenario is just one question. It gives you a realistic challenge, you make your choice, you see the realistic consequence, the end. The consequence might be a fast-forward peek into the future, but you make a decision in just one scene.
The following is a bare-bones mini-scenario. Ignore the fact that I obviously made up the options. Look at what the scene is requiring you to do and what type of feedback you get.
Bill needs to pass a scalpel to Sara during surgery. What should he do?
a) Put the scalpel in a sterile kidney dish and hold the dish out to Sara.
b) Hold it by the neck and place the handle in Sara's palm.
c) Put it on the surgical drape for her to pick up herself.
d) Toss it gently in her direction, handle first.

Feedback for b: Sara is distracted by a loose clamp and moves her hand just as Bill places the handle in her palm. The scalpel cuts Bill's thumb.
The feedback shows the consequences. It doesn't say, "Incorrect. You should never..."
Other people use "mini-scenario" to mean different things. For me, “mini-scenario” doesn’t mean an activity that forces people to go back and do it right, an easy activity, or something that happens only within a limited timeframe. The choice you make in a mini-scenario could have consequences that appear years later, but it's still a mini-scenario by my definition because it's just one decision.
Another way to look at it: You can make a mini-scenario using any multiple-choice question tool that lets you provide unique feedback for each option. It can be long or short, but it's just one decision, so it's a mini-scenario in my world.
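For instance, the scalpel question above could be sketched in Twee notation, with each option linking to its own consequence passage (passage names are made up):

```
:: Scalpel Question
Bill needs to pass a scalpel to Sara during surgery. What should he do?

[[Put the scalpel in a sterile kidney dish and hold the dish out to Sara.->Dish]]
[[Hold it by the neck and place the handle in Sara's palm.->Palm]]
[[Put it on the surgical drape for her to pick up herself.->Drape]]
[[Toss it gently in her direction, handle first.->Toss]]

:: Palm
Sara is distracted by a loose clamp and moves her hand just as Bill
places the handle in her palm. The scalpel cuts Bill's thumb.
```

One question, one decision, one consequence passage per option: that's the whole structure of a mini-scenario, whatever tool you build it in.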
Mini-scenarios are useful when...
The consequence of the player's choice could happen immediately or in the future.
In the example with Andreas, if you choose the right plan, the feedback could say that five months later, Andreas gets hit by a bus in Zambia but is treated at no cost because you sold him the correct plan.
If you choose the wrong one, poor Andreas has to limp to an ATM and withdraw large amounts of cash.
So even though the consequence happens in the future, this is a mini-scenario because just one decision was required.
A series of mini-scenarios can be strung together to create what feels like a story, but the consequence of one decision doesn't determine how the next decision is made.
A typical example is a "day in the life" story of disconnected decisions. For example, we play the role of a security guard who has to recognize and resolve unrelated issues during the day. Our decision about the tripping hazard at 10 AM doesn't affect what we do about the unlocked door at 1 PM.
Players make just one decision, but that decision can be difficult. For an example, see the previous post on using mini-scenarios to practice recovering from mistakes.
A branching scenario contains multiple questions ("decision points"). The consequence of one decision affects the next decision.
Two people going through the same branching scenario could see different questions and story lines.
Try these examples of branching scenarios if you aren't already familiar with the format.
Branching scenarios are useful when a decision made at one point determines the input for a decision made later. A classic example is a tricky conversation in which you ask the wrong question, limiting the information you have to work with in a later part of the discussion.
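That carry-over is easy to sketch in Harlowe: a variable set at one decision point changes what's available at the next. Everything below (the name Lee, the $rushed flag, the passage names) is invented for illustration:

```
:: Opening
(set: $rushed to false)
Your prospect, Lee, sits down. What do you ask first?

(link: "Ask about the budget")[(set: $rushed to true)(go-to: "Needs")]
(link: "Ask what problem Lee wants to solve")[(go-to: "Needs")]

:: Needs
(if: $rushed)[Lee answers curtly and changes the subject, so you head
into the negotiation without knowing what problem you're solving.]
(else:)[Lee describes the problem in detail, giving you much more to
work with in the negotiation.]
```

The wrong opening question doesn't just cost points; it shapes the information the player has at the next decision, which is what makes the branching worth building.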
They help people practice:
A common mistake is to assume you need a branching scenario if you want people to practice a longish process. However, you need branching only if:
If you decide that you don’t really need branching, you might consider offering several mini-scenarios that focus just on the tricky step in the process, as described in Mini-scenarios: How to help people recover from mistakes.
A third type of scenario might look at first like a branching scenario, but there’s just one path. Two people going through the activity will see the same questions because there’s only one way through the story.
To progress in the story, you have to answer “correctly.” If you choose a “wrong” answer, you’re forced to go back and try again until you get it right. Then the story continues.
I call this a “control-freak scenario,” because that’s how it feels to the player. You’re not allowed to experiment. You’re punished for bad choices by being made to go back and try again until you get it right.
Learn more about control-freak scenarios and when (rarely!) they might be useful.
My book Map It digs deep into how to design these types of activities.
By Cathy Moore
In a previous post, we looked at some ways to help people learn from their mistakes in branching scenarios. How can we do the same thing in the much more limited world of the mini-scenario?
A mini-scenario is a one-scene story in which the player makes a choice, sees the consequence, and that’s it. The consequence could be a fast-forward peek into the future, but the player makes a decision in just one scene.
Mini-scenarios are far easier to write than branching scenarios, but they can be limited.
Let’s look at ways to break out of those limits and help people practice recognizing and recovering from mistakes.
Let’s say that I’m a salesperson who needs to learn how to manage sales conversations with Martians. This involves some cross-cultural fancy stepping. To help me practice recognizing and recovering from mistakes, you could write a mini-scenario like the following.
You’re going to coach Bob through a sales conversation with a Martian. He’s wearing a mic so you can hear the conversation and has an earpiece to hear your suggestions.
He has arranged to meet his prospect, Jrod, at Starbucks. You’re already in a nearby booth, pretending to work on a laptop.
When Bob arrives, Jrod is sipping a frothy drink that’s topped with whipped cream and sliced ham.
“That looks delicious,” Bob says. “It would never occur to me to add ham.”
Jrod looks steadily at Bob without saying anything.
What do you say to Bob?
A. “Keep praising her good taste. Martians like to feel superior to humans.”
B. “Quick, change the topic to the solar storm we’re having. You’ve gotten too personal.”
C. “Don’t freak out. Martians are quiet at first. Tell her something about yourself.”
D. “Stop making small talk. Martians want to talk business immediately.”
This sounds like the beginning of a branching scenario, but it’s a mini that focuses on one specific skill — starting out right with a Martian prospect. The decision happens in one scene, and the consequence ends it.
Maybe if I choose D, “stop making small talk,” I get the following consequence.
“So,” Bob says heartily. “I understand you need some widgets.”
“Is there someone else I can talk to?” Jrod says. “Is there a Martian on your staff?”
Explore other options.
The story has ended, because we’re just practicing the opening of the conversation.
Obviously, it’s not a happy ending, and I don’t need you to tell me that, so you don’t. You’ve just shown me the unhappy consequence, and now you let me go back so I can learn a better approach.
I click “Explore other options” and go back to the original question. Maybe this time I look at the optional help you’ve provided and which I ignored the first time.
Now I choose A, telling Bob to keep praising her good taste. Here’s the consequence.
“I’ve always admired the Martian ability to combine flavors,” Bob says. “And colors! No human would think of wearing a purple cape with green shorts as you have.”
Jrod sips her drink. “I need 5,000 megawidgets by Friday,” she says. “Would this be possible?”
Explore other options.
It looks like I’ve chosen well and the conversation is now going in a good direction. If you want to make it abundantly clear, you could add a paragraph that fast-forwards the story to describe how Jrod ends up buying 10,000 megawidgets.
However, even though I’ve gotten a good ending, I still have the chance to “explore other options.” I could go back and try other things, seeing the consequences of other options, learning more about cross-planetary communications.
What did you just do?
You used a one-scene mini-scenario to help me practice one isolated skill: How to start on the right foot with a Martian prospect.
You combined error recognition and recovery in one scene. I had to recognize what error, if any, Bob had made, and tell him how to fix it. In our example, Bob actually started out well, so I also had to practice recognizing and continuing the best behavior.
Above, you had me recognize whether someone else had made a mistake. You could increase the pressure by having “me” make the mistake, but you also risk ticking me off: you might have me make a mistake that I’m sure I wouldn’t make in the real world.
Here’s how it could look.
You’ve just been called by Hofdup, the widget decision-maker for a Martian company.
“We may need a large number of your widgets,” Hofdup says. “However, you would have to reduce the price 85%.”
“May I ask how many widgets you’re interested in?” you say.
“No, you may not,” Hofdup says, sounding offended.
What do you say now?
A. “To determine if the discount is possible, I need to know the number of widgets.”
B. “I’m sorry, I realize that you know best. I would be happy to discuss this with you in your office.”
C. “I meant to say that…”
etc.
I might already know that you don’t question a Martian early in the conversation, but you had me make that rookie mistake. And now you want me to clean up after a mistake I like to think I would never have made.
So use “you” with caution, only when you’re sure that your players would make the mistake themselves. Otherwise, it’s probably safer to have someone else screw up.
I’m a scenario purist. I think an activity earns the label “scenario” only if the player is making decisions and seeing realistic consequences.
You or your stakeholders might not be quite so pure. As a result, you might write something like this:
Martha has completed the TPS form for the Acme project. See the form.
What will happen if she submits this form?
A. The client will be charged too much.
B. The form will go to another analyst instead of the customer satisfaction rep.
C. She will have to create a second TPS form because she left out a widgetification step.
D. The project will start as planned and the customer will be notified.
The above options only let me demonstrate my ability to recognize errors. They don’t let me resolve those errors. I don’t take any actions like I would in real life.
I’ll also get finger-wagging feedback. If I choose C, I’ll get something like, “Incorrect. Martha has included all the widgetification steps, but she’s charging the client too much. She has added a…”
Instead, combine error recognition and resolution as you did in the first two mini-scenarios, maybe like this:
Martha has completed the TPS form for the Acme project. See the form.
What should she do now?
A. Remove the unnecessary dewidgeting fee from the client total.
B. Change the address so the form will go to the customer satisfaction rep.
C. Add another widgetification step.
D. Submit the form so the project starts on time.
You’re still testing the same knowledge, which in learning objective land might be “Recognize common errors in TPS reports.”
But you’re also testing my ability to fix those errors, which is what we really want. “Recognize common errors” is an enabling objective for what really makes a difference on the job, which is “Submit error-free TPS reports.”
In the rewrite, you’ve combined error recognition with error fixing. This time, I get “showing” feedback that continues the story. Instead of wagging a finger at me, you let me draw conclusions like the adult I am.
For example, if I have Martha add a widgetification step, I find out that the project took longer than Sales promised due to excessive widgetification and the client demanded a partial refund as a result.
As we saw in the previous post, a branching scenario can help you provide more realistic practice in recognizing and recovering from errors. Rather than focusing on just one step of a longer process, you can help people practice recovering from early mistakes at several points in the process, with different consequences.
However, there are other reasons to choose one type of scenario over another. In the next post, we’ll look more closely at how to decide between a mini-scenario and a branching one. Sign up for blog updates to make sure you don’t miss a post.
Photo credit: Matt From London Flickr via Compfight cc