Summary of #PHTwitJC 24 Chat – Mental health discrimination: impact of Time to Change


On 22nd April 2013, Public Health Twitter Journal Club #PHTwitJC discussed a research paper on experiences of discrimination among mental health service users in England, which was part of the evaluation of a major campaign to tackle mental health stigma.

  1. We discussed this paper by Corker et al (2013). An introduction to the paper, with some background, is available on the Public Health Twitter Journal Club (PHTwitJC) blog, where you will also find further information about PHTwitJC. A full transcript of the chat can be accessed here or in the blog Archive.

    The paper reports on one of the strands of evaluation for the Time to Change project.
  2. Our discussion

  3. Question 1. Does the study address a clearly focused question/issue? Is the study design appropriate to address this?

    Participants agreed that there was a clear focus for the study, but it was felt that this was a difficult question to address:
  4. Yes, focused Q: did @TimeToChange achieve a 5% reduction in discrimination towards ppl using mental health services #PHTwitJC
  5. #phtwitjc think it was a clear aim, although not an easy one to design a study to evaluate maybe.
  6. @PHTwitJC study aim is clearly aligned to @TimetoChange target, however study design doesn’t determine if #TTC reps for reduction #PHTwitJC
  7. The Time to Change (TtC) programme identified three major sets of goals and outcomes, the first of which was:

    significantly increased public awareness of mental health (an estimated 30 million English adults would be reached), a 5% positive shift in public attitudes towards mental health problems and a 5% reduction in discrimination by 2012.
    As @carotomes and @Mental_Elf pointed out, the study was explicitly linked to the latter target, although it would be difficult to identify a causal link (this is fully acknowledged in the study).
    The study design was a cross-sectional telephone survey (the ‘Viewpoint’ study) with people identified as having recently used specialist mental health services. The survey was repeated, using different NHS trusts from across England, in 2008 (baseline), 2009, 2010 and 2011.
    Participants felt that this design had limitations, but was justifiable for pragmatic reasons (cost and scale of alternatives such as a cohort design):
  8. @duncautumnstore yes, exactly, was thinking all the way through it that using prospective cohort might have solved some issues… #phtwitjc
  9. @duncautumnstore but then also might create more issues as not everybody wld have contact with MH services in any given year. #phtwitjc
  10. @rorymorr prob best design though the probs you say and £s. Always a little wary of drawing too much from repeated cross section #PhtwitJC
  11. Question 2. Were the methods of selecting and recruiting to the study adequately described; were they appropriate?
  12. Summary of the design: Five NHS mental health trusts were selected for each round of the survey. They were selected to reflect the socio-economic spectrum (e.g. using index of deprivation quintiles) across England. Staff within each Trust used their records to identify a random sample of all adults (aged 18-65) with a mental health diagnosis who were in receipt of ongoing treatment (contact with specialist services within the past 6 months). Clinical staff within the Trust then checked and screened the sample.  Information packs were mailed out to the sample, and telephone interviews were scheduled after receipt of consent forms.

    The use of existing health service data to identify participants was regarded as a positive factor:
  13. @PHTwitJC yes, thought it was smart – nice to see NHS routine data being put to good alternative use too! #PHTwitJC
  14. However, journal club participants noted that slight alterations were made for each repetition of the study, in addition to the use of different geographical areas. For example, the sample size doubled after the first year as a result of disappointing response rates; after 2008, information packs were provided in community languages; in 2011 only, an incentive (£10 voucher) was offered for participation.
  15. @carotomes @duncautumnstore a lot of interpretive prblms stem frm slight changes that cld happen to sampling frame each year #phtwitjc
  16. @rorymorr @duncautumnstore true. Would it be better to stick with same areas every year? (Risk bias?) #PHTwitJC
  17. These changes were seen as justifiable in the main: the aim was to enhance recruitment and avoid the study being underpowered.

    Different trusts were used to enhance the generalisability of the findings; we discussed this strategy:
  18. Did selecting different trusts/ areas each year make it more representative of popn? Any probs doing that?
  19. @PHTwitJC interesting q! Are there any systematic differences between trusts/areas? I’m not sure. #PHtwitJC
  20. @PHTwitJC initial thought was danger that using same MH trusts would make them aware of being measured >> inaccurate picture #PHTwitJC
  21. A specific aspect of sampling, clinical staff screening out service users who were ‘judged to be at risk of distress’, was queried as a possible source of bias:

  22. @PHTwitJC as opportunity exists there for screening out ‘problem’ service users who cld have had the worst discrim experiences #phtwitjc
  23. More information on the criteria used was requested, in order to assess the subjectivity of this approach.

    @carotomes commented that there is always likely to be some recruitment bias in mental health studies, because of stigma preventing some from using services at all.
  24. Question 3. Was the sample adequate & representative of the population (mental health service users in England)?
  25. Table 1 of the article outlines the participant characteristics for each year of the study: numbers; gender; age; ethnicity; employment status; diagnosis; and whether they had received involuntary treatment. Numbers recruited in the first year were much lower than in subsequent years (537 compared to ~1000 in the later years), probably due to the changes in sampling and recruitment mentioned above. It was felt that the low response rates (8-11% of the sample) were not particularly surprising, given typical telephone-survey response rates and the characteristics of the target sample.

    Representativeness of the other characteristics varied:
  26. @PHTwitJC participant gender and age looked well balanced, however Dx employment and ethnicity less so #PHTwitJC
  27. Some of these differences were taken into account in the analysis:
  28. @carotomes @PHTwitJC I think they say they weighted sample by age, gender, ethnicity… but other stuff more free to vary #PHTwitJC
  29. However, some were not explicitly adjusted for: we picked out employment status and mental health diagnosis as differences that may have affected the results in unknown ways. Our discussion particularly focused on the differences in the proportions of participants with specific mental health diagnoses across the years:
  30. @PHTwitJC in particular, bipolar % reduces in sample as depression % increases – could be those with depression less likely to 1/1 #PHTwitJC
  31. @carotomes @PHTwitJC yes, good point, didn’t notice that… also the ‘other’ dx increases quite a lot 08->11. #PHTwitJC
  32. @PHTwitJC I feel variation in clinical Dx between years is problematic. Different Dx >> different discrimination… #PHTwitJC
  33. It was suggested that a statistical model could have been developed to take account of all of these differences. It would also have been useful for the reader to see a breakdown of the results (discrimination experiences) by diagnostic group, which is not offered in the paper.
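The weighting the club noted (adjusting the sample by age, gender and ethnicity) can be illustrated with a minimal post-stratification sketch. All group names, shares and rates below are hypothetical placeholders for illustration, not figures from the Viewpoint study.

```python
# Minimal post-stratification sketch: weight each group so the sample
# composition matches known population shares. All figures hypothetical.

def poststrat_weights(sample_counts, population_shares):
    """Per-group weight = population share / sample share."""
    n = sum(sample_counts.values())
    return {g: population_shares[g] / (sample_counts[g] / n)
            for g in sample_counts}

# Hypothetical age bands: this sample over-represents 18-35s.
sample_counts = {"18-35": 300, "36-50": 400, "51-65": 300}
population_shares = {"18-35": 0.25, "36-50": 0.40, "51-65": 0.35}
weights = poststrat_weights(sample_counts, population_shares)

# Crude vs weighted estimate of 'any discrimination' prevalence,
# given hypothetical per-group rates.
rate = {"18-35": 0.92, "36-50": 0.90, "51-65": 0.86}
n = sum(sample_counts.values())
crude = sum(sample_counts[g] * rate[g] for g in rate) / n
weighted = sum(population_shares[g] * rate[g] for g in rate)
```

Weighting this way corrects only the characteristics included in the weights (here, age); as the discussion noted, anything left out of the weighting, such as employment or diagnosis, remains free to vary between survey years.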
  34. Question 4. Is the survey tool (DISC) likely to measure appropriately; is it a valid tool?  

    The Discrimination and Stigma Scale (DISC) was used for the survey (more information in the paper, and background here). It was noted that the results, as presented in the paper, lost the nuances of the scale: they were reported as a binary (whether or not stigma had been experienced in a given area of life in the past year), rather than using the scale's frequency ratings:
  35. @PHTwitJC use of DISC was fascinating! Turning the answers into a yes/no, lost a load of data on frequency of discrimination! #PHtwitJC
  36. @duncautumnstore yes thought that was weird, but not familiar with DISC so was ‘hmm, well if you are sure that’s ok to do..!’ #phtwitjc
  37. @rorymorr not familiar either: seems to change the research question to reduction in % people discriminated, rather than amount #PHTwitJC
  38. Question 5. Results: are these presented adequately so that the reader can assess their validity & reliability? Are they adequately explained (any confounders)?
  39. Discussion continued from the previous point, relating to the nature of the 5% target and the research question:
  40. @duncautumnstore would it have been useful to at least see the ‘amount’ results as well as the ‘ever experienced’ ones? #PHTwitJC
  41. @PHTwitJC it depends on how you want to measure progress towards a 5% reduction in discrimination though: people or overall amount #PHTwitJC
  42. Table 2 lays out the proportions of participants reporting each type and area of discrimination, and the direction of change between 2008 and 2011. This was picked out as unhelpful for a full interpretation of the results, because of selectivity in what was presented:
  43. found it really lousy that table 2 doesn’t have each year of data, only 08 and 11 – you can’t assess trend or variability easily #phtwitjc
  44. #phtwitjc which makes me worry that it is reported like that because the numbers for other years are all over the place (cynical, I am)
  45. @rorymorr there are no excuses really, plenty of room for more data if they squidged up the columns… #PHTwitJC
  46. The main finding of the study was a statistically significant drop in discrimination experiences over the period, although it fell short of the 5% target. We discussed whether this finding was valid and meaningful.
  47. OK authors claim a statistically significant reduction in discrim experiences, do the data support that? #PHTwitJC
  48. Sampling frame differences, and the lack of data presented that took into account differences, were seen as problematic:
  49. @PHTwitJC unsure results are convincing that experiences of discrimination have reduced – stats not accounted for Dx, employment.. #PHTwitJC
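To make concrete what a test for a "statistically significant reduction" looks like in a before/after comparison of proportions, here is a two-proportion z-test sketch. The counts are invented for illustration and are not the paper's figures.

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented counts: respondents reporting at least one discrimination
# experience, 2008 (n=537) versus 2011 (n=1000).
z, p_value = two_proportion_ztest(489, 537, 880, 1000)
```

With these invented counts the drop is roughly three percentage points, yet the p-value lands just above 0.05, which illustrates the club's complaint about Table 2: when a result is borderline, showing only two of the four survey years makes it hard to judge how stable the trend is.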
  50. The ‘5% reduction’ target being evaluated was itself discussed, and was felt to be somewhat arbitrary and ambiguous:
  51. @PHTwitJC @duncautumnstore I would like to know where 5% target came from! What was rationale? #PHTwitJC
  52. @carotomes @PHTwitJC good q! Was it programme designers or political aspiration – 5% is the smallest large number?… #PHTwitJC
  53. @duncautumnstore @PHTwitJC was it because it sounds good, 5% “feels” like it would’ve made a difference? Or more scientific?! #PHTwitJC
  54. Question 6. Overall evaluation / implications for policy and practice

  55. Overall, the journal club participants were positive about the study, despite its limitations, and found it a very interesting attempt to evaluate an ambitious population-level intervention. We agreed that the evaluation project as a whole (reported within the same journal special issue) was innovative in its multi-stranded design and deserved to be approached as a whole, in order to properly appreciate the impact of the TtC intervention.
  56. @PHTwitJC study highlights the difficulty in measuring discrimination – but makes improvement on previous studies #PHTwitJC
  57. @PHTwitJC does an ok job but a lot of limitations… I think gets a ‘maybe’ for TTC impact on discrimination #phtwitjc
  58. TtC is a fascinating attempt to approach #MH from a mass /PH point of view – measurement always be difficult#PHTwitJC
  59. @PHTwitJC quite hard to directly link the change to the campaign as I think @carotomes pointed out earlier, but… 1/2 #PHTwitJC
  60. @PHTwitJC this isn’t the only evidence. Think all the papers together are a good eg of a multistranded evaluation! 2/2 #PHTwitJC
  61. @PHTwitJC #TTC exciting and groundbreaking campaign; looking forward to reading the other studies! #IHeartPublicMentalHealth #PHTwitJC