Cognitive change over time in psychosis: is decline continuous, generalised and specific to schizophrenia? Despite recognition of the profound impact of cognitive dysfunction on prognosis in psychotic illness, these questions have largely gone unanswered, with relatively few studies tracking individuals longitudinally over many years. Jolanta Zanelli et al address this by following up just over 100 participants with first-episode psychotic illnesses (65 with schizophrenia), using a broad neuropsychological battery at initial presentation and again a decade later.[1] Compared with a matched healthy cohort, all those with psychosis had baseline deficits in IQ. Those with schizophrenia showed further deterioration in IQ over time, with widening deficits in verbal knowledge and memory, but no additional changes evident in executive functioning or processing speed. In those with ‘other psychoses’, subsequent change was limited to verbal learning.
The findings support the ‘IQ decline hypothesis’ – namely, that there is a drop in functioning over time. However, they go against the ‘generalised decline’ theory: changes were not equal in magnitude across the domains tested, and varied between schizophrenia and the other psychoses. Symptom severity was associated with the degree of change, but only in those with schizophrenia; interestingly, the use, duration and type of antipsychotic medication had no effect on changes in cognition. The results remind us that cognitive functioning is a key factor for clinicians to consider, especially as some aspects are more prone to decline and may affect the support individuals require. In a world moving away from ‘schizophrenia’ towards a ‘psychosis spectrum’, it is also a prompt that not all psychoses are the same.
‘Non-specific effects’ is a common throwaway phrase in research, yet, as with the ‘placebo effect’, something positive is happening to patients, so shouldn't we understand it better? The phrase covers anything not directly intended by a theoretical model or treatment, for example the manner in which we engage with or speak to a person. Priebe et al reviewed the literature across a diverse range of psychiatric treatments.[2] Although the research assayed was quite heterogeneous, clinician communication emerged as a key non-specific aspect, clustering into verbal and non-verbal components. The former included initial contacts, empathy, clear communication and clinicians picking up cues about unspoken worries; the latter, factors such as clinician warmth, listening, a positive tone of voice and pro-social postures. How treatments were framed also emerged as important, although with interesting differences: there was some evidence that patients new to services appreciated a more optimistic pitch, whereas those already in contact with services favoured a more tempered approach. Shared decision-making about treatment and care was important, and encouragingly there were data showing this to be viable and productive even among those detained involuntarily.
These non-specific factors have more of an impact on what the authors call ‘process measures’, such as the therapeutic relationship, patient satisfaction and adherence, than on clinical measures such as symptom relapse. Crucially, the small literature that exists on the topic suggests that brief training courses can enhance these non-specific elements in clinical contacts, leading to better outcomes. The paper taps into a collective wisdom we all share from our own practice, but highlights how little this is subjected to scientific scrutiny, whether in measuring impact or in identifying which aspects are more or less effective. Further, our continuing professional development and training typically emphasise the accrual of more ‘factual’ knowledge and, it would seem, far less the enhancement of these key skills that clearly benefit patient care.
In addition to elevated levels of corticotropin-releasing factor (CRF), those with post-traumatic stress disorder (PTSD) show several alterations of the glucocorticoid system linked to the symptoms and severity of the disorder. Glucocorticoid-induced leucine zipper (GILZ) is a transcription factor activated by stress markers; it affects hippocampal and cortical dendritic spine integrity and is used as a reliable indicator of glucocorticoid pathway sensitivity. Looking to elucidate the role of GILZ, Lebow and colleagues used a transgenerational model to induce PTSD in mice.[3] Doxycycline (dox) was administered via drinking water to an experimental group of pregnant females once in late gestation, a period known to be a critical window for stress reactivity and epigenetic programming. Delivered this way to avoid the stress of handling, which often confounds such experiments, the dox activated a previously inserted lentiviral vector, causing continuous overexpression of CRF. Although the dams delivered early, their maternal behaviour was unaffected. However, their male pups showed an early dysregulation of the glucocorticoid system. Pups were left undisturbed until adulthood, at which point a portion underwent a stress-enhanced fear learning paradigm and behavioural tests to identify those that were ‘PTSD-like’. Although the prenatal stressor had no impact on the baseline anxiety of the mice, it did increase the likelihood of PTSD-like behaviours after the adult trauma in males, but not females. Reductions in GILZ messenger (m)RNA and methylation levels in amygdalar tissue were evident and corresponded to the number of stressors experienced, again only in males. Finally, as confirmation of the findings, the authors silenced GILZ in the amygdala with RNA interference in adulthood; this mimicked the double exposure to stressors in the PTSD induction and caused corresponding PTSD-like behaviours in the mice.
Following up in humans, the authors explored how GILZ interacts with early-life stress, multiple stress exposures and current diagnosis by recruiting a subset of 435 participants from the Grady Trauma Project. Gene expression and DNA methylation were measured via microarray, and clinical assessments included a modified PTSD Symptom Scale, the Clinician Administered PTSD Scale and the Traumatic Life Inventory. GILZ mRNA and methylation levels correlated with current PTSD diagnosis, severity of abuse exposure and number of traumatic incidents in men. GILZ is located on the X chromosome, leaving males more vulnerable to the impact of any alteration. Together, these animal and human data suggest that GILZ is an epigenetically regulated quantifier of accumulating stressful or traumatic experiences across a lifetime in men. As a marker of susceptibility to PTSD, GILZ could be measured in those with a known history of trauma as a way to target preventative measures at the vulnerable.
The cultural anthropologist Margaret Mead was not a clinical trialist, but her statement ‘Always remember that you are absolutely unique. Just like everyone else’ might have been apt. There has been much debate over the years about randomised controlled trials (RCTs) capturing only the average effects of a treatment in highly selected samples that bear little resemblance to the ‘real patients’ clinicians see in everyday practice. A related idea was recently put forward by Krauss, who analysed the ten most cited RCTs and concluded that trials ‘inevitably produce bias’ by virtue of participants not being truly equivalent between arms of a trial, and of a neglect of alternative factors that contribute to their main outcomes.[4] There is a counterpoint to Krauss in Harrell's blog.[5] Perhaps more than other specialities, psychiatry has reason to hope that differential or heterogeneous treatment response is real, because we cannot yet explain why two people derive some or no benefit from the same medication or intervention. One seductive and visual illustration is Simpson's paradox where, for example, subgroups of a sample (say, people aged 60–70 years) show a positive benefit with an antihypertensive drug, but when analysing for an effect over all ages (the whole sample) there is no overall effect of treatment. In this case, there is a differential effect of treatment when conditioned on another variable (age); the sketch below makes this concrete. Statistically, what we really want is to understand patient × treatment interactions, but we often lack adequate trial designs or data (expensive repeated cross-over designs are needed to establish this). Worryingly, we are likely to be seduced by methods that promise us a way to identify who will (or will not) benefit, perhaps the most familiar being ‘responder analysis’ based on subgroups of patients whose response fell above or below a dichotomising threshold. And now we have personalised medicine, facilitated by a boom in technological approaches including the mining of electronic health records, wearable devices and the application of machine learning, where (perhaps overoptimistic) bold claims are made, such as Perna et al’s statement that ‘Theoretically, predictive tools may be developed for nearly all clinically relevant questions, assisting clinicians when making decisions with patients’.[6]
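For readers who want to see the paradox rather than take it on trust, here is a minimal simulation with entirely invented numbers: a hypothetical antihypertensive lowers blood pressure by about 10 mmHg within each age stratum, yet the pooled analysis shows no benefit, because older, higher-pressure patients are more likely to be treated. None of this is from the papers discussed; it is only a sketch of the statistical point.

```python
# Simpson's paradox with invented numbers: the drug works within each
# age stratum, but age confounds the pooled comparison.
import numpy as np

rng = np.random.default_rng(42)

def simulate_stratum(n, baseline_bp, treated_fraction, drug_effect=-10.0):
    """One age stratum: returns treatment flags and post-treatment BP."""
    treated = rng.random(n) < treated_fraction
    bp = baseline_bp + rng.normal(0.0, 5.0, n) + drug_effect * treated
    return treated, bp

# Younger patients: lower baseline BP, rarely treated.
t_young, bp_young = simulate_stratum(1000, baseline_bp=140, treated_fraction=0.2)
# Older patients: higher baseline BP, usually treated.
t_old, bp_old = simulate_stratum(1000, baseline_bp=160, treated_fraction=0.8)

for label, t, bp in [('young', t_young, bp_young), ('old', t_old, bp_old)]:
    print(f'{label}: treated minus untreated = '
          f'{bp[t].mean() - bp[~t].mean():+.1f} mmHg')

# Pooled analysis that ignores age: the ~10 mmHg benefit vanishes.
t_all = np.concatenate([t_young, t_old])
bp_all = np.concatenate([bp_young, bp_old])
print(f'pooled: treated minus untreated = '
      f'{bp_all[t_all].mean() - bp_all[~t_all].mean():+.1f} mmHg')
```

Within each stratum the treated group sits about 10 mmHg lower; pooled, the difference drifts to roughly zero or even reverses sign, purely because treatment allocation is entangled with age.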
So, before we get excited about personalising treatments, should we not first look for evidence that patients actually do respond differently to them? In the context of antipsychotic treatment for psychotic disorders, this is what Winkelbeiner et al did: noting that ‘An assumption among clinicians and researchers alike is that the response to antipsychotic drugs by patients with psychosis differs considerably between individuals’, they set out to examine this by meta-analysing 52 RCTs of antipsychotics.[7] The rationale behind their approach is this: in both the control and the treatment arm of a trial, the spread of pre- and post-treatment symptom scores is attributable to sources that include within-participant variation, but the treatment arm carries an additional source of variation attributable to any patient × treatment interaction effect. One might therefore reasonably assume that if the treatment arm shows more variation than the control arm, this is some evidence of variation in individual response. Winkelbeiner et al derived a log variability ratio to measure this contrast in variation across the 52 RCTs. Here is the punchline: rather than a relative increase in variability (suggestive of individual response), they found lower variability in the treatment than in the control arms. Further, looking at each individual antipsychotic, they found the same pattern. They helpfully conclude by reminding us that RCTs ‘… provide unbiased estimates of the relative efficacy of an intervention, which even the largest observational studies cannot provide’ (emphasis added) and, on the ‘placebo response’, counter that if such effects were occurring they would (by virtue of randomisation) be present in both control and treatment arms and so cancel out.
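For the mechanically minded, the statistic is essentially the log of the ratio of the outcome standard deviations in the two arms, with a small-sample correction. The sketch below follows the general meta-analytic definition of the log variability ratio rather than the authors' own code, and the trial figures in it are invented.

```python
# Log variability ratio (lnVR): compares outcome spread between trial
# arms. lnVR > 0 means more variability under treatment (consistent
# with individual response); lnVR < 0 means less, the pattern
# Winkelbeiner et al report. Standard meta-analytic definition, not
# the authors' own code.
import math

def ln_vr(sd_treat, n_treat, sd_ctrl, n_ctrl):
    """Point estimate of lnVR with the usual small-sample correction."""
    return (math.log(sd_treat / sd_ctrl)
            + 1.0 / (2 * (n_treat - 1))
            - 1.0 / (2 * (n_ctrl - 1)))

def ln_vr_sampling_variance(n_treat, n_ctrl):
    """Sampling variance of lnVR, used to weight each trial in the pooling."""
    return 1.0 / (2 * (n_treat - 1)) + 1.0 / (2 * (n_ctrl - 1))

# Hypothetical trial: SDs of post-treatment symptom scores in each arm.
estimate = ln_vr(sd_treat=9.5, n_treat=150, sd_ctrl=11.0, n_ctrl=148)
print(f'lnVR = {estimate:+.3f}')  # negative: treatment arm less variable
```

Pooling these per-trial estimates, weighted by the inverse of their sampling variances, gives the meta-analytic contrast in variability that the paper reports.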
Finally, we like to think of Kaleidoscope as the No Spin Zone, not least as we are all avid Fox News fans. How much spin goes on in the abstracts of scientific articles? Does authors’ ‘amusing’ use of ‘mind the gap’ and inane song lyrics in paper titles bedazzle us away from an oversell on the abstract front? Although research conventions and standards set out how RCT results should be reported, these do not apply to abstracts. Do this lack of consensus and authors’ understandable desire to highlight the merits of their work in the shop-window of those opening 250 words make it too tempting to stray from the truth? Jellison et al undertook a cross-sectional review of clinical RCTs with non-significant primary end-points published in six leading psychiatry and psychology journals – including our own BJPsych – between 2012 and 2017.[8] Unlike Bill O'Reilly, they defined spin as ‘use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome, or to distract the reader from statistically nonsignificant results’. Their review included 116 RCTs, with spin found in 56%, most commonly in the abstract’s results and conclusion sections. Interestingly, there was no relationship between industry funding and spin. The findings matter: we are all guilty of skimming papers by just reading the abstracts – in part, that is what abstracts are for – and you come to Kaleidoscope because you are too lazy to do your own in-depth literature review each month, right? The authors suggest establishing standards for abstracts and actively inviting reviewers to comment on the presence of any spin in papers assessed; we are happy to report we found none in theirs.