Publish date: Dec 2016
Addiction Journal Session: Advancing Methodology in Addictions Research
This plenary session was organised by the editor of the journal Addiction, Professor Robert West, and showcased work towards improving research methods in the field that had been published in the journal.
The first presentation was by Susan Michie, Professor of Health Psychology and Director of the Centre for Behaviour Change at University College London. Susan leads a programme of research developing the science of behaviour change interventions and applying that science to intervention development and evaluation. She began by noting the complexity of interventions to change behaviour, be it stopping smoking, reducing alcohol consumption or losing weight, and the difficulty in replicating and implementing them in practice. Such complexity means that interventions are often poorly reported in the scientific literature, and therefore difficult to combine in systematic reviews of their effectiveness. It is often difficult to know which components are the most effective, and in which circumstances.
Developing more effective interventions will require advances in the scientific methods we use to study and change behaviour, and she went on to describe a method for characterising the content of interventions, i.e. the potentially active ingredients. The BCT Taxonomy v1 is a method for specifying interventions in terms of their behaviour change techniques (BCTs), using standardised terms and definitions developed by cross-disciplinary and international consensus. This method has been used to identify effective BCTs in meta-analyses, to design and evaluate interventions systematically, and to implement accurately those interventions found to be effective. Susan gave examples of studies that have used this methodology to identify effective BCTs in smoking cessation and alcohol reduction interventions, and to assess the extent to which interventions have been delivered according to protocol. Read more and hear the presentation here
Susan Michie: Characterising intervention content in terms of behaviour change techniques
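One benefit of coding interventions with standardised labels is that their content becomes directly comparable across studies. As a toy illustration (the two interventions and their codings are invented for this sketch; the labels themselves are drawn from BCT Taxonomy v1), coded content represented as sets can be intersected to find shared techniques:

```python
# Toy illustration, not an official coding tool: two hypothetical interventions
# coded with standardised BCT Taxonomy v1 labels.
smoking_app = {
    "1.1 Goal setting (behaviour)",
    "2.2 Feedback on behaviour",
    "3.1 Social support (unspecified)",
}
alcohol_leaflet = {
    "1.1 Goal setting (behaviour)",
    "5.1 Information about health consequences",
}

# Standardised terms make content directly comparable across interventions.
shared = smoking_app & alcohol_leaflet
unique_to_app = smoking_app - alcohol_leaflet
print(sorted(shared))
```

The same property is what allows meta-analyses to group trials by the BCTs they contain rather than by loose intervention descriptions.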
The next speaker was Dr Emma Beard, Research Fellow in the Department of Clinical, Educational and Health Psychology at University College London. She addressed one of the limitations of traditional statistical analyses, namely the misinterpretation of p values. If we run an experiment and obtain a p value greater than 0.05, we traditionally say that we have a non-significant result, but this does not provide evidence for the null hypothesis. All we can actually say is that there is insufficient evidence to reject the null hypothesis in favour of the alternative. A p value greater than 0.05 might mean there is no effect, but it may also reflect data insensitivity, i.e. insufficient power to detect an effect. Furthermore, the p value is highly unreliable: repeat your experiment and you may obtain a very different p value, even with very large samples (see this video for an example).
One solution is to use a measure of power: high power means that the data are probably sensitive enough to detect an effect if one exists. Alternatively, confidence intervals provide a measure of how sensitive the data are. However, Emma’s presentation demonstrated the calculation and interpretation of a third way of addressing this issue: the use of Bayes factors. As she demonstrated, a Bayes factor is the ratio of the likelihood of the data under one hypothesis (e.g. an intervention effect within a given range) to the likelihood under another (e.g. no effect). Bayes factors are particularly important for differentiating a lack of strong evidence for an effect from evidence for a lack of an effect. Emma’s presentation provided an introduction to the theory behind Bayes factors and a user-friendly way of calculating them. This was illustrated by a review of randomized trials published in Addiction between January and June 2013, which aimed to assess how far Bayes factors might improve the interpretation of the data.
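To make the ratio-of-likelihoods idea concrete, here is a minimal sketch (the numbers and the assumed effect range are invented for illustration, and this is not the calculator from Emma's presentation) of a Bayes factor for an observed mean difference with a known standard error, comparing an effect hypothesis against a null of no effect:

```python
import numpy as np
from scipy.stats import norm

# Invented numbers: an observed mean difference and its standard error.
obs_diff = 1.2  # e.g. extra units of reduction in the intervention arm
se = 0.8        # standard error of that difference

# H1: the true effect lies uniformly between 0 and 3 (an assumed plausible range).
# H0: the true effect is exactly 0.
effects = np.linspace(0, 3, 1001)

# Likelihood of the data under H1, averaged over the hypothesised range,
# versus the likelihood under the point null.
likelihood_h1 = norm.pdf(obs_diff, loc=effects, scale=se).mean()
likelihood_h0 = norm.pdf(obs_diff, loc=0, scale=se)

bayes_factor = likelihood_h1 / likelihood_h0
print(bayes_factor)
```

With these invented numbers the Bayes factor comes out a little under 2, i.e. only weak evidence for an effect over the null, which is exactly the kind of case where a bare "p > 0.05" would hide how inconclusive the data are.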
The final speaker was Dr Jo Neale, Reader in Qualitative and Mixed Methods Research based within the Addictions Department at the Institute of Psychiatry, Psychology and Neuroscience, King’s College London. Jo began by highlighting the long history of qualitative research within the addictions, but as the Senior Qualitative Editor for Addiction she is more aware than most that it accounts for only a small proportion of the journal’s output. Because the road to publication is somewhat harder for qualitative research than for other methodologies, it is important that qualitative researchers are as explicit and transparent as possible in explaining the methods they have used. The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly.
In this presentation she provided a step-by-step outline of ‘Iterative Categorization’ (IC), a simple but rigorous technique for analysing qualitative textual data, developed within the field of addiction. IC is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversation analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. The presentation slides and audio are available here. A paper describing the process in detail is available here.
Jo Neale: Iterative Categorization (IC) - a systematic technique for analysing qualitative data
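As a rough sketch of the gathering step that precedes categorisation (my own illustration with invented excerpts, not Neale's published procedure, which is designed to need only a word processor), coded excerpts can simply be grouped by code so that all of the data assigned to each code can be reviewed together:

```python
from collections import defaultdict

# Invented (code, excerpt) pairs standing in for coded interview data.
coded_excerpts = [
    ("stigma", "I didn't tell my GP for years."),
    ("treatment access", "The waiting list was six months."),
    ("stigma", "People assume the worst when you mention heroin."),
]

# Gather every excerpt under its code, ready for joint review and
# iterative refinement into categories.
by_code = defaultdict(list)
for code, excerpt in coded_excerpts:
    by_code[code].append(excerpt)

for code in sorted(by_code):
    print(code, len(by_code[code]))
```

In practice this gathering is done by copying coded text into headed sections of a document, which is then re-read and reorganised over successive passes.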
The opinions expressed in this commentary reflect the views of the author(s) and do not necessarily represent the opinions or official positions of the Society for the Study of Addiction.