Golden Doodles and Most Significant Change Theory
I've returned to writing this week after a short break. This hiatus in blogging is largely due to the 'puppy needs' of the most significant change in my life: our golden doodle, called Raven.
Never having had dogs before, I didn't know what to expect. We planned for Raven to be a therapy dog for my eldest, who is struggling with her health. She has certainly made a difference in this respect, and will only get better as she is trained. However, as anyone with a dog will know, adding a dog to a family home changes everything, and in ways you can never fully predict or appreciate in advance. If you were to ask each family member what the most significant change is that Raven has made, we would all give different answers, and crucially, our answers would differ from those we might have predicted two months ago. This leads me to an absolutely not clunky segue into the topic of this post: Most Significant Change theory (MSC).
In previous posts, I have been documenting my learnings around Theory of Change. In particular, I wrote about Key Performance Indicators (KPIs), and I've been thinking about their compatibility with the complexity of many educational interventions.
If we expect complex interventions to have unintended consequences, how can we possibly plan KPIs in advance?
We can think of Most Significant Change theory as 'monitoring without indicators'. This research method is largely credited to Rick Davies and Jess Dart, and their bible of MSC, Most Significant Change Technique and a Guide to Its Use, is well worth a read. We learn from Davies and Dart that, in a nutshell,
"The most significant change technique is a form of participatory monitoring and evaluation. It is participatory because many project stakeholders are involved both in deciding the sorts of change to be recorded and in analysing the data. It is a form of monitoring because it occurs throughout the program cycle and provides information to help people manage the program. It contributes to evaluation because it provides data on impact and outcomes that can be used to help assess the performance of the program as a whole."
In a classic MSC process, focus groups would be asked for 'the most significant change' they noticed as a result of an intervention or activity. Once this data is collected, the implementation team discusses the results and pulls out common threads and themes; in turn, these threads inform the next iteration of the activity. Hence, there is not only an in-built feedback loop, but also a means of acknowledging and dealing with complexity. The method is succinctly explained here:
"If we decided to use the MSC methodology, the starting question would not be “how many people have learned to read by the end of the project” but rather, “how have the lives of those who have learned to read changed”. In other words, “what has been the most significant change in that person’s life after learning to read”?"
There are several practical guides to be found online, not least this useful article, in which we learn that MSC is most useful where:
- it is not possible to predict in any detail or with any certainty what the outcomes will be;
- outcomes will vary widely across beneficiaries;
- there may not yet be agreement between stakeholders on which outcomes are the most important;
- interventions are expected to be highly participatory, including any monitoring and evaluation of the results.
Furthermore, Davies states,
"The types of programs that are not adequately catered for by orthodox approaches and can gain considerable value from MSC include programs that are:
- complex and produce diverse and emergent outcomes
- large with numerous organisational layers
- focused on social change
- participatory in ethos
- designed with repeated contact between field staff and participants
- struggling with conventional monitoring systems
- highly customised services to a small number of beneficiaries."
All of this screams complexity and schools to me.
For those of us involved in large-scale educational interventions in schools, MSC seems highly appropriate to our daily work. We shouldn't be surprised when the change expected at the planning stage turns out not to be the most significant change that occurs in practice. For example, we run a weekly Swim Confidence Club, where groups of students from local schools gain support with their swimming, which they may have had little chance to develop during and since the pandemic. While we do indeed collect evidence of progress via pre- and post-assessments, often the most significant changes are captured anecdotally: an increase in confidence, the conquering of a fear of water, making friends with children they wouldn't normally mix with... Arguably, all of these outcomes are just as important as moving through the swimming levels.
As a result of capturing these voices, next year's project may involve some capture of this change via what Christina Astin calls 'coconut data': soft and fuzzy on the outside, but hard on the inside. Think: graded scales of response to more subjective questions around confidence, self-esteem, mental health and so on. The compelling idea is that future iterations of the project will capture more of the 'change', and we can also plan for more explicit opportunities to effect such change alongside the original premise, thus squeezing as much as possible out of an activity.
At a recent Physics Revision Day for local schools, I experimented with capturing some audio using MSC questioning and tidied up the recording with the Spotify Podcaster App. Lacking the university-level resources needed to facilitate MSC rigorously, this kind of ad-hoc data capture could be a way forward for many of our educational partnership projects. All that is needed is a phone and 15 minutes of editing. I'd love feedback on the recording, which you can click below:
Now back to Raven, my Most Significant Change...