Barriers such as a lack of infrastructure, research funds and resources prevent robust or standardised evaluation of community engagement initiatives, resulting in a piecemeal evidence base. This hinders the augmentation and scale-up of existing initiatives, and an accurate understanding of the gaps in mental health systems.
Project Khuluma is currently experiencing this challenge. Khuluma is a pioneering support group model run by the SHM Foundation that provides psychosocial support to closed groups of 10-15 adolescents living with HIV/AIDS (ALWHA) via text-message.
Evaluation of the project uses a mixed qualitative-quantitative approach with psychological assessment tools to measure general wellbeing, perceived levels of internalized social stigma, perceived levels of social support and social isolation, and self-reported treatment adherence.
The team at the SHM Foundation are attempting to strengthen the evaluation methodology we use without having to conduct an expensive randomised controlled trial.
We'll be presenting these findings at the International HIV/AIDS Conference in Durban, July 18th-22nd. Follow us on Twitter during the conference and beyond, @SHMFoundation, to keep up to date with our findings and insights. You can learn more by visiting our website (shmfoundation.org), or by reading our case study on MESH.
Really interesting to read about the project Nikita and the plans to use a realist evaluation framework.
If you haven't seen it yet you might be interested to explore our bank of resources on Realist Evaluation: https://mesh.tghn.org/articles/category/realist-evaluation/
Good luck with the project!
This sounds like a great mix of methods to really triangulate across and get the most out of all the evaluation data.
My thinking was along the lines it seems you are already moving in - I think a Realist Evaluation framework would really help in a number of ways:
It would allow you to draw on the distinctive contributions of the different stakeholders involved in the evaluation (thinking here specifically of Pawson and Tilley's notion that there can be a broad 'division of expertise', with insights from different stakeholders on the contexts, the active ingredients of the intervention, and the pattern of outcomes).
Realist evaluation would also help you draw out the theories of change behind different aspects of the project, and check them against the data - admittedly never an easy matter when it comes to multi-level processes like stigma, and the subtle dynamics of peer group interactions and psychosocial support.
In addition, an RE framework gives you scope to sequence and tack between the qualitative and quantitative components in a way that would seem to fit your existing plans and allow you to get the most out of data collection and analysis.
And of course Realist Evaluation is very interested in outcomes and impact; it just allows exploration of the variations in 'what worked for whom and in what circumstances', rather than the black box of an RCT-style pre and post comparison. This kind of learning is really needed for projects that involve socially complex processes (which, it seems, sometimes get neglected because their impacts are hard to measure).
But anyway, as you can see, I am revealing my enthusiasm for Realist Evaluation :) but it does seem like it would fit the contours of the evaluation challenge well.
Good luck with the evaluation, I'll be very interested to see the learning from this innovative project.
You are right - our priority is to develop a mixed evaluation methodology for our complex social intervention given that an RCT would not be suitable.
Evaluation of the project uses a mixed qualitative-quantitative approach with psychological assessment tools to measure:
general wellbeing
perceived levels of internalized social stigma
perceived levels of social support and social isolation
self-reported treatment adherence
Participants, facilitators and medics are engaged in each of these evaluative components at strategic points in the life cycle of the support group. Specifically, a short 20-25 minute survey delivered at the beginning and end of the project gathers data on participants' general wellbeing, adherence, psychological distress and social isolation. Participants also take part in an evaluative working group after the support groups have finished, where qualitative research methods and creative activities enable participants and facilitators to reflect on the process and express their thoughts.
We are also trying to play different evaluation tools off one another. For example, on medication adherence we are comparing the self-reported data from the questionnaire with a bank of 40,000 SMSs, which allows us to go into a lot more depth and extract insights on why participants may not be taking their medication, or how they encourage each other to take it.
We are also working on fitting these tools into a realist or impact evaluation framework.
Hi Nikita, thanks for sharing this. Khuluma looks like a really interesting initiative. I think your comment about the challenge of finding resources to devote to evaluation will resonate with many others on MESH. I know you are presenting findings at the Durban conference, but I wonder if you could say, in a nutshell, how you have strengthened the methodology without, as you say, conducting an RCT.
I ask because, for the kind of intervention you are developing, it seems an RCT may not be the most appropriate framework anyway, given the social complexity of the intervention and the need to address differences of context when you are looking to generalise learning from the project. Can you say a little bit more about the solution you came to?