Monday, February 16, 2009

Reality or ... Media?

My January 24 post, "Collapse of a Community of Practice", included an aside about what training practitioners are really doing vs. what the media -- print, business blogs, "forums" and "webinars" -- would have us believe (another aside: there is nothing positive about the word "webinar"). My third book, From Analysis to Evaluation, was envisioned as a compilation of tools developed and used by practitioners in the field, loosely arranged around the ADDIE model of instructional design. Dozens of authors and training practitioners were invited to contribute to it, and were specifically asked for tools they were using in their own work. I thought I had a pretty good idea of what to expect in terms of submissions, and was somewhat surprised at what did not arrive. For instance, no one -- not one person -- submitted anything on determining training return-on-investment (ROI).

As this is such a hot topic in training-related magazines and books, I don't know whether the lack of submissions is coincidental, whether no one ever needed to create a "homegrown" tool for this, or whether it's a reflection of what is really happening in the field in spite of what the literature tells us. Since I knew readers would expect to find it, I went back and added some material where reviewers felt its absence would be especially noticed, but let me say again: I asked people to share what they actually used.

Last week I tipped sacred training cows. This week I'm asking something different. What do you find that you really use in your practice, and does it differ from what media and myth say you should?

4 comments:

Anonymous said...

One tool I use all the time is the evaluation strategy template from the St. Francis Xavier University Adult Ed program. It's based on Kirkpatrick's model. I use it right away; it's easy for the client to understand, and it helps me help them think about and articulate what they really want to achieve with the learning they want me to design for them.
Rob Bartlett

Unknown said...

I wonder if this question goes far enough? I absolutely believe there is a discrepancy between what IDs/trainers "should" be doing and what they are actually doing...but isn't this more a failure of leadership on our part than a media myth of what our profession is or should be? You mention that you received no responses pertaining to ROI, but did you receive many on needs assessment either? I would guess that most people DO fall into the order-taker category, and that this necessarily leaves off the initial needs assessment and benchmarking, making it much more difficult to truly measure ROI or even ROE. Thanks for asking these questions, Jane...we need to keep these conversations active to improve our professional practice. I wrote a similar assessment in January on my blog: http://learningintandem.blogspot.com/2009/01/instructional-design-is-dead.html

Jane Bozarth said...

Koreen, yes, I received piles of tools related to needs assessment. (The post links to the book; you can see the table of contents at Amazon.) In fact, I was not able to use all the tools submitted.

But I was not implying that we should be measuring for "ROI" (see my earlier post on alternatives to the Kirkpatrick taxonomy); I was saying that the literature would have us think this happens all the time, when my experience suggests it is not happening at all. (Kirkpatrick himself, by the way, says there is no such thing as "ROI".) I don't see this as a disconnect between reality and "should," but between reality and, for lack of a better word, hype.

Anonymous said...

Jane, I take this as a sign, and take heart, that trainers recognize the snake oil of "ROI". Consider doing a study on how much the hype ties to a flavor-of-the-month "initiative" in the industry.