Saturday, 26 May 2012

Did you have a satisfactory course?

I have written and delivered training courses for many years and in all that time there has always been some form of evaluation at the end of the course. Although there are several questions on the current QA evaluation form, the most important is Overall Satisfaction.

What is the evaluation form for? 


The design of evaluation forms has varied widely over time, asking respondents to rate many aspects of the course, including pre-course administration, the venue and its facilities, catering, training materials and the trainer.

These forms were often referred to as 'Happy Sheets', which tells you, I suppose, what we thought they were for: if the delegate scored relatively highly, then they must have been happy, and therefore it must have been a good course.

Not really. I have known delegates who had a great time, had a lot of fun and made some friends, but who felt that, in terms of learning, the course wasn't the right level for them.

Never mind whether their employer (the client) was happy.

Why is that?

Like many trainers, I would check out the scores at the end of a course and look at the trainer scores (i.e. mine) first. How did I do? What did they think of me?

Only natural right?

I would of course check out the other scores, such as facilities, administration and materials, to see how they fared, and I would probably tut if there had been a problem in one of those areas. In fact, in a selfish way, it was a reinforcement of my score if everything else was lower, e.g. if the trainer's score was higher than those for course structure and materials.

As long as my score was good, I could hold my head up.

An interesting debate recurred over the years to do with scoring. It had to do with not using an odd-numbered scale for a question, e.g. 1–5. If you score on a scale with an odd maximum, you enable the respondent to sit on the fence with an average score, e.g. 3 out of 5.

So I have seen question scales of 1–6 and 1–10 on forms, but very few of 1–5.

I understand the logic, but those who came up with the theory missed an important point: if the respondent scored anything other than the top two values, then they were not happy at all.

At QA, each of our questions is scored out of 9. So for me, anything other than an 8 or a 9 is poor.
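That "top two values" rule of thumb can be sketched as a quick calculation. This is purely illustrative; the scores below are invented, not real QA results:

```python
# Hypothetical evaluation scores on a 1-9 scale (invented data).
scores = [9, 8, 6, 9, 7, 8, 9, 8]

# By the rule of thumb above, only 8s and 9s count as a happy delegate.
happy = sum(1 for s in scores if s >= 8)
top_two_box = happy / len(scores)

print(f"Top-two-box rate: {top_two_box:.0%}")  # prints "Top-two-box rate: 75%"
```

On this view, a form full of 6s and 7s is not a sea of "above average" results; it is a room of delegates who were not really happy.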

Over the years I have come to realise that although I want my contribution to be appreciated and valued, it is the combination of factors that really matters.



Are delegates satisfied? 


Although the scores in each category are important on a QA evaluation form, the single most important thing that QA have done is to focus on another question entirely.

Overall Satisfaction

This has only four possible answers, and therefore a swing of one has a great impact.

Possible answers are:

1. Very Satisfied

2. Somewhat Satisfied

3. Somewhat Dissatisfied

4. Very Dissatisfied

 

Someone who is Very Satisfied is highly likely to come back to and/or recommend QA.

It stands to reason that Somewhat Dissatisfied and Very Dissatisfied are totally unacceptable, as the delegate is clearly not happy with some aspect(s) of the course/QA experience.

Someone who is Somewhat Satisfied is saying it's OK.

I had an Antipasto Misto in Swindon the other night. It was OK. I'll not be walking 600 yards to the restaurant again this evening.

On the other hand, I once had a meal in a restaurant (uniontavernlondon.com) that I enjoyed so much (food and service) that I now make a point of staying in the hotel across the road, even though it means a 3-mile walk each way (I don't do The Underground: full of sweaty/smelly/sneezy people) to our centres at Middlesex Street, King William Street or Tabernacle Street.

I am committed to doing everything that I can to ensure that all delegates are happy to say they are Very Satisfied. Not only by making sure that I deliver the best course that I can, hopefully attaining a high personal trainer score, but also by focusing on those areas that are in theory "not my responsibility".

Even though I take a strong interest in my trainer scores, I also look at the other scores, particularly if I feel that they contributed to a Somewhat Satisfied score.

It gives me the motivation to approach colleagues with an influence on other scores to see if there is anything I can do to help them address the problem.

This includes discussing course pre-requisites with account managers or in the case of courseware, seeking out the author to report a typo or misleading step in an exercise. If it's externally sourced material, this might involve me compiling a list of gotchas in the material and making sure that the delegates have a copy to save any problems whilst working on them.

If it's a problem with the room (e.g. faulty air conditioning, a broken chair or window blind), I will be aware of it on day one, so I can start dealing with it straight away rather than kicking it into the long grass.

After every course, I look at the evaluations and try to identify what we did right and what we could have done better.

 

How do you deliver a great course? 


Having said all that about every aspect being important, it is only natural that I constantly strive not only to improve my scores and keep them high, but also to find areas of the course, in terms of design and delivery, where improvement will help contribute to a high level of satisfaction.

With that in mind, I have decided to write a series of posts/articles that deal with ideas, tips and best practices, i.e. those that work for me and get me results.

If you are a trainer, perhaps you will find them useful.

If you are not a trainer, well I hope you will find the insight interesting.



Watch this space.



I'll be back just as soon as I come up with a catchy title for the series.




See you soon

Phil Stirpé
"I don't do average!"



 
