The SPICE Ecosystem: Evaluation and Impact

The SPICE Framework emerged from research into people, organisations and decision-making. A whole ecosystem of tools has developed around it that can be applied face-to-face or in the digital environment of the ‘metaverse’.

I’ve worked in people development in one form or another for my whole career, which now spans some forty years. I recognise that’s a long time! I’ve been round the block a few times and seen approaches come and go.

Throughout that long career, I’ve been directly involved in evaluation and impact assessment. You wouldn’t know it to look at me, but qualitative evaluation is a bit of a passion of mine. In practice, this means I always want to know what has worked, what has been effective, and why.

If you are reading this and share my commitment to effective qualitative evaluation, perhaps you also share my frustration that it is so rarely done.

My own background in qualitative evaluation extends beyond learning and development, the principal area of concern within the SPICE Ecosystem™.

As it happens, I’m trained and practised in qualitative research methods, in particular action learning, action research and appreciative enquiry. I’ve deployed these methods in consultancy projects in the UK, the USA and the Netherlands, with businesses of all kinds across the Public, Private and Third sectors, as well as in the domain of Learning and Development.

Readers will therefore understand my frustration when business and learning and development colleagues minimise evaluation, often reducing it to just the proverbial ‘happy sheet’.

A very early experience of such methods was writing a dissertation on learning evaluation back in 1984. Back then, I remember learning the distinction between evaluation and validation in learning and development programmes. I learned how tempting it can be to use evaluation simply to validate a learning and development programme, without ever evidencing whether or not it met the participants’ or their employing organisation’s development needs.

My dissertation was partly a reflection on two post-graduate professional programmes I’d been contracted to evaluate in a consultancy role. In that role, I considered the psychological contract between the participants, whose training was funded by their employers, and the two universities delivering the programmes. I remember arguing that there was a vested interest amongst students and university staff in using the evaluation process to validate – to justify – the programme.

Coming right up to date, I recently read a report on a senior leadership development programme for a large national organisation in the UK. The report’s authors proudly asserted that they had used what they described as the ‘Kirkpatrick Model’ to evaluate their Senior Leadership Development Programme. Of course, this sounds great, and the report looked impressive.

As readers may be aware, this ‘Kirkpatrick Model’ is taught on many training programmes for Learning and Development professionals as a standard approach to evaluation and impact assessment. It’s a great model that has been around since the 1950s. For those unfamiliar, it outlines four stages or components of evaluation:

  1. Participant reaction, mainly to the learning experience.
  2. Testing the skills and knowledge participants have acquired.
  3. Evaluating the participants’ own perceived behaviour change as a result of the programme.
  4. Quantifying and describing the results of participating in the training.
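
To make these stages concrete, here is a minimal sketch in Python (purely illustrative, not part of the SPICE Ecosystem™ tools; the programme and evidence names are invented) of how evidence might be logged against each stage, so that an evaluation which stops at the ‘happy sheet’ shows up immediately as a gap at stages three and four:

```python
from dataclasses import dataclass, field
from enum import IntEnum


class KirkpatrickStage(IntEnum):
    """The four stages, from participant reaction through to results."""
    REACTION = 1   # reaction, mainly to the learning experience
    LEARNING = 2   # skills and knowledge acquired, typically tested
    BEHAVIOUR = 3  # perceived behaviour change back in the workplace
    RESULTS = 4    # quantified and described results of the training


@dataclass
class ProgrammeEvaluation:
    """Logs evidence against each stage so that gaps are visible at a glance."""
    programme: str
    evidence: dict = field(
        default_factory=lambda: {stage: [] for stage in KirkpatrickStage}
    )

    def record(self, stage: KirkpatrickStage, item: str) -> None:
        self.evidence[stage].append(item)

    def missing_stages(self) -> list:
        """Stages for which no evidence has yet been gathered."""
        return [stage.name for stage, items in self.evidence.items() if not items]


# An evaluation that never progresses beyond a feedback survey and a knowledge
# test leaves stages three and four empty.
evaluation = ProgrammeEvaluation("Senior Leadership Development Programme")
evaluation.record(KirkpatrickStage.REACTION, "end-of-module feedback survey")
evaluation.record(KirkpatrickStage.LEARNING, "post-programme skills assessment")
print(evaluation.missing_stages())  # ['BEHAVIOUR', 'RESULTS']
```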

Whatever the strengths of the Kirkpatrick Model (and there are many), its delivery requires a comprehensive commitment to travel from stage 1 to stage 4.

So, back to the example of the report that asserted to its intended audience that the ‘Kirkpatrick Model’ had been used to evaluate their Senior Leadership Development Programme. In fact, I think their evaluation had at best reached stages one and two, and possibly edged into stage three, in that they might have gained indications of participants’ perceived need to change behaviour. However, I could see no way that the evaluation could have extended to include evidence of how behaviours had actually changed, nor of how the programme might be credited with a quantifiable change in performance. Kirkpatrick’s model, or any similar model, requires a commitment to a cycle of ongoing evaluation that connects with the measurement of participants’ performance, not merely their reaction to a programme.

In fact, a friend of mine who runs one of the largest Learning and Development businesses in the UK (‘no names, no pack drill’, as they say) tells me that he is invariably asked to quote for a full evaluation when tendering for programmes. Yet on only two occasions, across many years and hundreds of contracts, has he actually been commissioned to deliver such an evaluation, the reasons given being time and cost (typically 12-15% of the project delivery budget). To me, this makes no sense. It is nonsense to expend a large amount of resource on developing people without evaluating its effect and measuring its impact.

Therefore, let me share with you an approach to evaluation and impact assessment we have applied within the SPICE Ecosystem™. In this, I want to pay tribute to John Mattox and Mark Van Buren and their excellent book, Learning Analytics (published by Kogan Page in 2016).

Drawing on Mattox and Van Buren, within the SPICE Ecosystem™ we have incorporated tools that enable ongoing, consistent evaluation and impact assessment from Stage 1 to Stage 4 of Kirkpatrick’s model, deploying these five characteristics of effective evaluation:

  • Establish an evidence-base (in other words, be clear about expected outcomes/impacts).
  • Gather evidence as close to the point of delivery as possible.
  • Engage and involve participants in the evaluation process.
  • Provide feedback to participants.
  • Ensure evaluation is ongoing.

Within the SPICE Ecosystem™ we’ve incorporated really simple questioning tools that allow businesses to evaluate effect and behaviour change and to measure the impact of development. For minimal cost, these questioning tools can be used by businesses of any size and complexity, not just to evaluate and measure impact but also to compare business areas and track change.

For example, if you are running a small business, you can use the SPICE questioning tools to keep track of the performance and development of your team. If you are running a large, complex multi-site business, you can not only keep track of performance and development overall but also compare and track the development of different teams, locations or other groupings. In so doing, you gather important data bearing on business performance. This is integral to the SPICE Ecosystem™, enabling you to embed evaluation and impact assessment from stage one to stage four of the Kirkpatrick model.
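
Purely as an illustration of that kind of tracking and comparison (this is not the SPICE questioning tool itself; the question wording, the 1-5 scale and the site names are placeholders), a minimal sketch in Python might look like this:

```python
from collections import defaultdict
from statistics import mean

# Scores are stored per (business area, question); each new evaluation round
# simply appends another score, so change can be tracked over time.
responses = defaultdict(list)


def record_response(area: str, question: str, score: int) -> None:
    """Store a single 1-5 rating for one question in one business area."""
    responses[(area, question)].append(score)


def compare_areas(question: str) -> dict:
    """Average score per business area for one question, for comparison."""
    return {
        area: round(mean(scores), 2)
        for (area, q), scores in responses.items()
        if q == question
    }


# Two evaluation rounds for the same behaviour-change question at two sites.
question = "I apply what I learned in my day-to-day role"
record_response("Site A", question, 3)
record_response("Site A", question, 4)
record_response("Site B", question, 4)
record_response("Site B", question, 5)
print(compare_areas(question))  # {'Site A': 3.5, 'Site B': 4.5}
```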

At the heart of our experience is a method we can deploy to support any business to develop and innovate through its people, because at our core is a consultancy that supports innovation through people, with a tried and tested approach proven in our own work. As such, Innovation People is able to support you and your business to develop people and increase levels of sustainable innovation. Get in touch for more information through our strategic development product at [email protected] or visit www.spiceframework.com

Written by Michael Croft

July 27, 2023
