We discussed a systematic review which set its objective to “assess the effects of major multi-sport events on health and socioeconomic determinants of health in the population of the city hosting the event.” Details of the paper, discussion points, transcript and summary can be found in the archive.
We invited the authors to respond to our discussion, and today I received the following response from Gerry McCartney, Sian Thomas and Hilary Thomson!
1. Were the aims of this study clear?
It was agreed that the aims were appropriately wide-ranging, but overall clear. The premise for conducting this systematic review was clear – no previous systematic reviews had been conducted, and there is a need to understand the impact/legacy of events in order to justify their cost.
However, further clarification would have been beneficial on some points, e.g. the distinction between ‘major’ and ‘minor’ multi-sport events, and whether there is a difference in impact between multi-sport and single-sport major events – this was not discussed in the paper.
Response – We defined the included events in the methods: “We included studies of any design that had investigated the impact on the host population of any “one off,” international, multi-sport event focused on a single city or area that took place between January 1978 and January 2008”. That was our definition of major multi-sports events – it’s a learning point for me if that wasn’t clear.
2. Was the systematic review comprehensive?
The search strategy employed was well described and transparent, encompassing a variety of grey literature resources. The breadth of studies included was considered appropriate in order to measure all legacy aspects. Further comments included:
“Exclusions interesting: support for event or new physical infrastructure. Latter partic important in supporting impact #PHTwitJC” @Fibigibi
Response – This is a fair point. We excluded support for the event as we didn’t think that was an impact. We excluded lists of physical infrastructure because we had no way of determining which infrastructure would have been built anyway or what the opportunity cost was. For example, in Glasgow (which is to host the 2014 Commonwealth Games), new infrastructure which has been listed by the event organiser includes the M74 motorway. Funding for this was approved years before the Commonwealth Games was even bid for.
3. Were any adjustments made for study size or quality?
The authors acknowledged the lack of quality evidence identified through their search strategy. They attempted to grade the evidence and to weight the narrative/judgement more strongly towards ‘stronger’ evidence – however, it would have been helpful to have the adjustments applied outlined in a table. The narrative summary made it difficult to know how and where any adjustments had been applied.
Although it was acknowledged that the journal word count may have been a factor, suggestions for improvements included reference to a hierarchy of evidence and a demonstration of how grading was conducted using tables or a forest plot.
Response – We didn’t adjust any results – we presented all data which met the inclusion criteria and presented those data alongside the critical appraisal of it. In the narrative synthesis we tried to emphasise the higher quality data and highlight quality issues where relevant. We agree that a forest plot would have been great, but it is only possible to create where you have a standardised effect size in a number of studies.
The closest we got to that was with economic growth – however, the quality of most of those studies was very low because they failed to take into account the opportunity costs, and there was only a very small number of better quality studies which could have been synthesized. To present all the economic data together would have given inappropriate weight to the poorer quality studies and would have been misleading.
Visual representations of the data are very valuable and help promote both transparency in narrative synthesis as well as access to and use of the evidence identified. One method of presenting data where standardised effect sizes are not available for all the included studies is the ‘effect direction plot’ which has been developed since the sport review was completed.
See link or contact Hilary Thomson for more information (publication forthcoming).
4. Do you believe the results? Could anything else explain these findings?
One participant (@rorymorr) pointed out that publication bias was more likely to favour studies which supported the games, and therefore further unpublished evidence may be unsupportive of a legacy. No assessment of publication bias was conducted by the authors.
However, overall it was felt the results were justified by the evidence presented. As a group we were very surprised at the lack of evidence regarding legacies!
Response – We agree that publication bias is likely and likely to be biased towards positive results. There was debate amongst the authors about whether we should have been more forthright in saying that this made ‘evidence of no impact’ more likely than there simply being an ‘absence of evidence’ because of this. We could not assess this statistically because of a lack of a single common outcome to create a funnel plot to look for asymmetry.