On Wednesday 31st July, #PHTwitJC discussed:
GARVIN E, Cannuscio C, Branas C. (2013) Greening vacant lots to reduce violent crime: a randomised controlled trial. Inj Prev 19:198-203 doi:10.1136/injuryprev-2012-040439.
The full text can be accessed at this link: http://injuryprevention.bmj.com/content/19/3/198.full
The discussion was structured around the five questions in the previous paper summary post.
1. Were the aims of this study clear? (consider whether it clearly defined the population, intervention, outcome and comparison)
The participants felt that the aims of the paper were clear and justified by a strong introduction section.
A few words used in the paper were new to participants – potentially due to the study using US terms – but they were either well defined in the paper or were easily understandable.
“I thought that the aims were clear too – ‘greening’ was new to me but nicely defined” – @duncautumnstore
“The terminology felt very American – would we call them ‘lots’ in UK? What would we say instead? ” – @carotomes
2. Was the methodology appropriate?
Randomised controlled trials (RCTs) are one of the strongest study designs, and it was agreed that an RCT was an appropriate method here.
“RCT was appropriate to assess causality between greening and crime, although unsure whether truly randomised…” – @carotomes
Participants noted that there was some ambiguity about the randomisation method, and it was hard to pick out from the paper how many clusters of lots were ultimately randomised.
“E.g. P199 each cluster lot was viewed by PHS to determine if appropriate to send for greening authorisation? I didn’t fully understand this – seemed like a vetting process prior to randomisation, also unclear of randomisation method.” – @carotomes
“yes, it was a little unclear. I couldn’t work out how many clusters were randomised in the end” – @duncautumnstore
There was some discussion about whether the method used to group lots meant that the design was closer to a cluster RCT. It was noted that this would still be a strong design, but that more information should also be reported when conducting a cluster RCT.
“I wondered that. Or whether it was closer in design to cluster-RCT” – @duncautumnstore
“absolutely – it was a cluster RCT rather than RCT. But no discussion of heterogeneity assessment” – @carotomes
Several of the difficulties of conducting an RCT on a built environment intervention were also discussed, as these may account for some of the compromises in the study design.
“not often you get the circumstances to do an RCT on the built environment, so maybe some compromises had to be made.” – @duncautumnstore
“[Pennsylvania is] big enough to fit a few more clusters into the trial… perhaps financial reasons, but we don’t know from the paper” – @duncautumnstore
3. Did the analysis use appropriate methods?
There was some discussion over the difference-in-differences method used, and whether any other statistical methods could have been used instead.
Participants found the difference-in-differences method intuitive; however, in their experience it rarely appears in statistics textbooks. This made it harder to judge the strengths and limitations of using difference-in-differences in this paper.
“I’d not come across difference-in-differences before. Not sure on its strengths and the right circumstances to use it.” – @duncautumnstore
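For readers who, like the participants, are new to it, the core of difference-in-differences is simple to sketch: the change over time in the treated group minus the change over time in the control group. The figures below are purely hypothetical illustration, not values from the paper:

```python
# Minimal sketch of a difference-in-differences (DiD) estimate.
# All numbers are hypothetical and for illustration only.

def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Change in the treated group minus change in the control group."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical mean incidents per lot cluster, before and after greening:
effect = diff_in_diff(treat_pre=10.0, treat_post=7.0,
                      control_pre=10.0, control_post=9.0)
print(effect)  # -2.0: crime fell by 2 more in greened clusters than in controls
```

The subtraction of the control group's change is what distinguishes DiD from a simple before-and-after comparison: it nets out trends (such as a city-wide fall in crime) that affect both groups, under the assumption that the two groups would otherwise have followed parallel trends.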
4. Do you believe the results? The study reports non-significant reductions in crime and improvements in perceptions of disorder, but also a significant improvement in residents' perceptions of safety. What could explain these findings?
The participants felt that reductions in crime from green spaces seemed plausible, but given the uncertainty over the methodology used in this study it was difficult to be confident about some aspects of the results.
“I don’t know if I can be confident as I’m unclear about their methodology and analysis” – @carotomes
The authors themselves highlighted the preliminary nature of these results in the paper, and the participants discussed aspects of this.
“I get the feeling that the authors were a little tentitive from drawing too many certain conclusions too” – @duncautumnstore
“[the] authors themselves recognise the limitations of small sample and unrobust data” – @carotomes
“they use phrases like ‘preliminary evidence’ and advocated larger trials” – @duncautumnstore
The participants identified the potential for bias in the way the questionnaire was used to measure residents' perceptions of safety.
“survey on perceptions of safety and violence primes interviewees of research purpose!” – @carotomes
5. What implications do the findings have for public health practice & policy?
The discussion of the implications referred back to the tentative nature of the findings. It was suggested that the focus of the paper could have been on the usefulness of the method rather than the results.
“feels like focus on paper should’ve been ‘is this method practical to test Q’ in prep of larger trial” – @carotomes
Research findings are essential for advocating for good public health, but one participant felt that the results from this paper weren’t striking enough to help advocate for green spaces.
“an important methods paper, but I don’t think the results are striking enough to be the one used to advocate for green spaces” – @duncautumnstore