It is interesting that a methodological debate is emerging around our randomised controlled trial (RCT) of music therapy for depression.[1] Sen and colleagues could have used any RCT of a psychosocial intervention to discuss their ideas of alternative designs. In relation to our specific study, they raise three main points: (a) that our study was not double-blind; (b) that patients may have had a preference for music therapy; and (c) that the experimental group may have been followed up more carefully than the control group. We respond to these points in turn.
First, studies of psychosocial interventions such as music therapy can never be double-blind: both the therapist and the patient necessarily know which therapy is being provided or received, and the patient's active participation is essential. Demanding a double-blind study therefore reflects a limited understanding of the nature of these therapies. We do not always agree with the opinions of Seligman,[2] but he has put this point very aptly: ‘Whenever you hear someone demanding the double-blind study of psychotherapy, hold onto your wallet.’ Single-blind RCTs are the most rigorous evaluation method possible in this field.
Second, the advertisement through which potential participants were recruited to our study did not mention music therapy. We therefore believe that a strong preference for music therapy was unlikely in our sample, although we cannot completely rule out the possibility. Extensions of RCTs such as Zelen’s design[3] and partially randomised preference designs[4] are not new (the allocation logic of the latter is sketched below). They provide interesting options for evaluating many kinds of intervention, including music therapy. However, there are also good reasons why they are not used more often. For one thing, as Sen et al note, hybrid designs may be difficult to interpret. For another, their questionable additional merits may not justify their much higher costs. Our trial was the first of its kind, and a simple randomised design therefore seemed most appropriate to us. For future trials of psychosocial interventions it may be worth exploring the potential of hybrid designs.
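To make the contrast concrete, a partially randomised preference design allocates participants roughly as follows. This is a minimal sketch in Python, not a description of any specific trial; the function, arm labels and field names are ours, purely for illustration:

```python
import random

def allocate(participant_id: str, preference: str | None) -> dict:
    """Allocate one participant under a partially randomised
    preference (comprehensive cohort) design.

    preference: 'music_therapy', 'standard_care', or None if the
    participant expresses no strong preference at baseline.
    """
    if preference is not None:
        # Participants with a strong preference receive their chosen
        # treatment and form the non-randomised (preference) cohort.
        return {"id": participant_id, "arm": preference, "randomised": False}
    # Participants without a strong preference are randomised as usual.
    arm = random.choice(["music_therapy", "standard_care"])
    return {"id": participant_id, "arm": arm, "randomised": True}
```

The difficulty arises afterwards, in combining the randomised and preference cohorts in a single analysis; that is precisely the interpretive problem noted above.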
Third, in our study the assessor, who also scheduled the assessment interviews independently, was masked to treatment assignment, and masking was broken in only a very few instances. We can therefore exclude the possibility that the experimental group was followed up with greater care than the control group. Our conclusion remains that the differences in drop-out rates were an effect of the treatment, not an artefact of the study design.
Overall, Sen et al present interesting general thoughts on the evaluation of psychosocial interventions. Of the various suggestions made for improving study designs, we believe that assessing treatment preference and incorporating it in either the design or the analysis is the most practicable (the analysis route is sketched below). Hybrid designs including both randomised and non-randomised elements may be useful in certain circumstances, but because of their high costs and unclear interpretation we would not recommend them for general use.
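As an illustration of the analysis route, baseline preference can simply enter the model as a covariate rather than altering the design. The following Python sketch assumes a hypothetical dataset with columns outcome, treatment and preference (none of these names come from our study) and fits an ordinary least squares model with statsmodels:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial data: 'outcome' is the post-treatment depression
# score, 'treatment' the randomised arm, and 'preference' the
# participant's baseline-rated preference for the experimental
# treatment. All column names are illustrative.
df = pd.read_csv("trial_data.csv")

# Preference enters the model as a covariate, so the estimated
# treatment effect is adjusted for baseline preference.
model = smf.ols("outcome ~ treatment + preference", data=df).fit()
print(model.summary())
```

The appeal of this route is that it requires only one extra baseline measurement and a standard regression adjustment, rather than the recruitment and cost overhead of a hybrid design.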