Category: RCT

Conference of Cochrane Evidence: Useful, Usable & Used #CE3U

My journey with Cochrane started one summer afternoon in 2010, when I interviewed a Tallaght doctor (Tallaght is a rough suburb in Dublin, Ireland) about treatments for drinking problems in people who also use other drugs. I emphasized that brief psychosocial interventions were the treatment of choice for patients who don’t use other drugs and that there was no reason why this should be different for drug users. He asked me whether I was Swedish, because of my accent, and replied with a single question which kept me awake at night and started my career as an addiction investigator: “Does it work?” I decided to celebrate four years of trying to find an answer to his question at the Cochrane conference in Manchester, UK.
Wednesday 23rd April 2014
This year’s conference of UK and Irish Cochrane contributors swapped plenaries and workshops – Wednesday kicked off with two sessions of developmental workshops. The motto of the priority-setting workshop was “Don’t start a journey that you can’t finish”. Pragmatism is a very important part of priority setting. The value of setting priorities in healthcare is the expected gain from reducing uncertainty – in other words, from reducing the probability that somebody somewhere is getting the wrong treatment.
Figure 1. Bees were the theme of the Cochrane conference
The key question of the public health workshop was “How to produce good reviews quickly?” A growing number of people are interested in doing reviews under the public health group. Most public health studies are non-randomised. Evidence forms just one part of the complex process of public health policy – timeliness is the big factor. The idea of local context permeates all policies – is this relevant to your local area? All of us, as Cochrane reviewers, give shades of grey, and they [policy makers] want black-and-white answers.
The first afternoon plenary was opened by a faithful member of the Cochrane family, Nicky Cullum. She described how easy reviews were in the past. Her talk inspired 12 new tweets in the first 5 minutes of the plenary (#CE3Useful). The beginnings of the Cochrane nursing group were accompanied by scepticism: “Are RCTs possible in nursing? Is experimentation at odds with caring?” The explosion of nursing trials in recent years has posed new challenges: “How on earth do we help non-academic clinicians to have both a clinical and an academic career?” Trisha Greenhalgh concluded the first day with a provocative lecture on the boringness of Cochrane reviews. She used the example of the young doctor Archie Cochrane in a German camp to demonstrate that the art of rhetoric consists of logos, ethos and pathos. Her other work, on how innovations arise and spread, further supported the rhetoric argument. While logos is the only element admitted in scholarly rhetoric, factual knowledge can rarely be separated from its ethical or social context. By trying to separate them, Cochrane researchers strip away the very thing they need to be exploring – how to change the world through science. The methodological fetishism that has developed in the Cochrane Collaboration (linked to control, rationalism and accountability) hinders the production of more realist and interesting reviews.
Thursday morning plenaries helped the delegates recover after the gala dinner the night before. Rich Rosenfeld, a Professor and Chairman of Otolaryngology, explained how Cochrane reviewers can help policy makers with rapid reviews – “Good is ok; perfect we don’t need [for guidelines]”. A health economist, Karl Claxton, continued the discussion on when no more evidence is needed. Research takes a long time, and the evidence we already have can inform the allocation of research funds for new projects. However, we should be cautious about judging the usefulness of trials with hindsight; it’s wrong because we don’t know the context. Neal Maskery made the audience “lol” with a very entertaining and interactive plenary focused on what we know about how people make decisions. Our brain is so good at pattern recognition that it wants to do it all the time. One consequence is base rate neglect – a cognitive bias. Biases such as this one hinder innovation and affect our decisions in all areas, from buying a car to prescribing medicines. Al Mulley, an expert on shared decision making, finished the morning lectures with a story of how every patient brings their own context, using examples from his research on how bothersome urinary dysfunction is.

The special addition to the conference was the presence of Students 4 Best Evidence, some of whom won prizes from the UK Cochrane Centre, including free travel and conference participation. Read more about their winning entries on prostate cancer, dental health, smoking, and long-term illness.

From a personal perspective, starting a Cochrane review took me on a journey which led from a clinical question (from the Tallaght doctor), to policy development, medical education and further research in a very short time. I still don’t know whether counselling works for drink problems in people who also use other drugs, but I’ve learned how to find an answer using the Cochrane methodology.

Beg, steal or borrow: getting physicians to recruit patients in clinical trials

Leaflets, adverts and phone calls have all been used to recruit patients into clinical trials, with mixed results. Still, personal contact remains the most reliable method – if you can get the recruiter to do it. In this post, I explore some of the barriers to clinicians’ recruitment activity in randomised controlled trials.

Lack of time, specialist staff and patient motivation are the most frequently reported barriers that prevent clinicians from recruiting their patients into clinical trials. Even though a physician signs up for a study and is informed about what is involved, they often do not complete the job. Some are distracted by competing clinical priorities, while others cannot get a positive answer from their patients. Whatever the reason, the research suffers through low participation numbers and prolonged study set-up.
Researchers from the University of Birmingham, UK, looked at the ways to improve clinicians’ recruitment activity. Their systematic review of the scientific literature compared the impact of different recruitment strategies and the underlying clinician attitudes. To recruit successfully, clinicians should be incentivised or supported in some way. Unfortunately, many researchers use supports that don’t work. What’s more worrying is that nobody knows how to boost clinicians’ recruitment rates. The study authors recommend that each clinical trial use qualitative methods to ask clinicians what would work for them and act on their suggestions. Another issue was what clinicians think of clinical trials. Misconceptions about trial methods still prevail, and clinicians do not see the positives of trials; nor do their patients. Improved education and communication from researchers to physicians can overcome these issues.

Paying research participants for taking part can increase the number of people who agree to join a study – the so-called consent rate. It has become the norm in studies in the Western world. Still, some studies and countries are unable to provide financial incentives to patients who volunteer for research. Direct payments may also be viewed as introducing unwanted bias into research results: some may think that people who get paid for research would not participate if they did not get anything. Human motivation is a mysterious subject, and money is part of it – it is the currency of modern society.

Is it ethical?

The healing relationship between patient and doctor can be viewed as unsuitable for recruiting patients into clinical trials. Patients may feel obliged to agree, without making a fully informed decision. Ideally, recruitment should be done by someone who isn’t involved in the patient’s care; however, this is often not feasible in real life. On the one hand, participants should make an informed decision about their participation and decide voluntarily. On the other hand, researchers should not surprise patients who attend medical services for non-research purposes. One way to overcome this problem is a two-stage recruitment process, as used in our study. The first step is to give information: the care provider gives a leaflet with information about the study to potential participants. The person goes home and reads the leaflet at their leisure. When they next come to see their doctor, they can ask questions about the study and decide to take, or not to take, part.

Recruitment to randomised trials will probably always remain an issue for science. With an open mind, the investigators and clinicians can seek better solutions for creating trials that would attract human participants and help advance science for the benefit of all.

Cited articles:
Ben Fletcher, Adrian Gheorghe, David Moore, Sue Wilson, Sarah Damery: Improving the recruitment activity of clinicians in randomised controlled trials: a systematic review. BMJ Open 2012;2:1 e000496 doi:10.1136/bmjopen-2011-000496

Klimas J, Anderson R, Bourke M, Bury G, Field CA, Kaner E, Keane R, Keenan E, Meagher D, Murphy B, O’Gorman CSM, O’Toole TP, Saunders J, Smyth BP, Dunne C, Cullen W: Psychosocial Interventions for Alcohol Use Among Problem Drug Users: Protocol for a Feasibility Study in Primary Care. JMIR Res Protocols 2013;2(2):e26. doi:10.2196/resprot.2678

Recruitment shock

3.6% response rate? Shocking! For our new feasibility study, we sent over 200 invitations to primary care doctors in Ireland, and the invitees sent us back a very strong signal. “We are not interested”, or “we are too busy”, or “we don’t have enough eligible patients”? Whatever the reason, the message remained the same: No, thanks.

The primary objective of our study, as for most feasibility studies, is to estimate the numbers needed for a definitive trial. We want to know how many people should be invited into the study; of those, how many should be randomized; of those, how many will stay until the end. Right from the beginning, we were faced with the question of whether we can recruit enough people for a fully powered experiment.
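As a back-of-the-envelope illustration, those three numbers form a simple multiplicative funnel. Only the 200 invitations and the 3.6% response rate come from this post; the consent and retention rates below are made-up assumptions for the sketch, not study results:

```python
# Hypothetical recruitment funnel. Only the 200 invitations and the 3.6%
# response rate come from the post; consent and retention are assumptions.
invitations = 200
response_rate = 0.036   # doctors who replied to the mailshot
consent_rate = 0.50     # assumed: fraction of responders who agree to take part
retention_rate = 0.80   # assumed: fraction retained to final follow-up

responding = invitations * response_rate   # expected responders
consenting = responding * consent_rate     # expected consenters
retained = consenting * retention_rate     # expected completers

print(round(responding, 1), round(consenting, 1), round(retained, 1))
```

Chaining the rates shows why a low first-stage response is so damaging: every downstream stage shrinks the final sample further, so the number of invitations has to be planned from the end of the funnel backwards.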

Statistical power

Power in research experiments is about finding the truth. Experimenters want to know whether their drugs or treatments work. If a treatment works and they give it to a group of people, some of them will improve and some won’t. There’s a lot of chance and uncertainty in any drug or treatment administration. If we want to know the truth beyond the effects of chance, we need to give the treatment to the right number of people. There’s a formula for it, known to most statisticians. It depends on several things, such as the size of the improvement that you want to observe in the treated group, and other confounding factors. The higher the power of a study, the more likely its result reflects the truth (see, e.g., Dr Paul D Ellis’s site).
A rule of thumb says that the more people are in the study, the higher the chances of finding a meaningful impact of the intervention. Common sense also tells us that the more people in the trial, the more representative they are of the whole population – the more confident you can be that your results apply to all; except for Martians – unless you really want to study Martian citizenship.
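The statisticians’ formula mentioned above can be sketched in a few lines. This is a minimal normal-approximation sample-size calculation for comparing two group means; the function name is mine, and the conventional alpha of 0.05 and power of 0.80 are illustrative defaults, not figures from our study:

```python
import math
from statistics import NormalDist

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate participants needed per group to detect a difference in
    means of `delta` (standard deviation `sigma`) in a two-group trial,
    using the normal approximation:
        n = 2 * (z_{1-alpha/2} + z_{power})**2 * (sigma / delta)**2
    """
    z = NormalDist().inv_cdf  # standard-normal quantile function
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2
    return math.ceil(n)

# A medium effect (difference of 0.5 standard deviations) at 80% power:
print(sample_size_per_group(delta=0.5, sigma=1.0))  # 63 per group
```

Note how the required number grows with the inverse square of the effect size: halving the improvement you want to detect roughly quadruples the sample you need, which is exactly why small feasibility samples cannot answer the “does it work?” question on their own.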

Solution

The easiest solution would be to call some friends who are doctors and ask for a favour. This should work, but it’s not really scientific. Or you can shut down the study and conclude that it’s not feasible. Or you can do the study with the small number of interested participants. Or you can send another mailshot – a reminder – to everyone; sometimes that helps.

Fidelity questions

Clinical trials use elaborate methods to make sure that everybody does exactly what was planned. Measuring treatment fidelity means checking the agreement between the study plan and practice. Some health problems require complex changes. How do we measure fidelity in trials of complex interventions? Here are some ideas for fidelity checking.

The National Institutes of Health established a workgroup on treatment fidelity as part of their behaviour change consortium (1999). They surveyed each centre in the consortium to find out which fidelity measures they use in trials. The workgroup recommendations span five areas: study design, training of providers, delivery of treatment, receipt of treatment and enactment of treatment skills. They are useful for investigators who want to measure and improve their treatment fidelity. The key areas for our study are design, training, delivery and receipt.

Fidelity in our PINTA study

Our feasibility study has several aims. The first is to estimate parameters for a fully powered clinical trial. The second is to find out whether our intervention works. As a complex intervention, it targets multiple levels – the doctor level and the patient level. We hope to improve doctors’ practices and patients’ health behaviour. Intervention fidelity in a multi-level study means adhering to different guidelines and processes. Our trainers must deliver uniform training to all learner groups. The doctors must provide consistent interventions to all patients in the intervention group.

The availability of personal portable audio recorders, e.g. smartphones, provides new and exciting opportunities for fidelity checking, but it raises some ethical issues. Doctors and other interventionists can easily record their consultations with patients and email them to researchers for fidelity checking – but email is not secure.

To avoid a potential confidentiality breach, researchers can instead ring the doctors, give them a one-sentence brief and ask how they would respond if such a patient appeared in their next appointment. Recording such phone calls is not a technical or ethical problem; it is not without limitations, though. A telephone consultation with a researcher in the role of the patient does not reflect real-life consultations and, as such, cannot be an accurate skills check. Doctors may not want to be called and recorded for quality assurance purposes, even if it’s anonymous and does not affect their income or professional standing.

When designing measures to improve treatment fidelity in our study, we have to consider how they will be perceived by our participants and providers. These are the strategies for monitoring and improving treatment fidelity that we plan to use:

Design:

  • Guidelines for primary care providers to manage problem alcohol use among problem drug users
  • Scripted curriculum for the group training of providers

Training:

  • Booster session (practice visits) to prevent drift in provider skills
  • Access of providers to research staff for questions about the intervention
Delivery:

  • Instructional video of patient–doctor interaction to standardize the delivery
  • Cards with examples of standard drinks and scripted responses – to standardize the delivery
  • Question about patient scenario in follow-up questionnaires (telephone contact)
Receipt:

  • SBIRT checklist for providers (process measure)
  • Pre- and post training test (knowledge measure)
  • Patient follow-up questionnaire will check whether each component of the intervention was delivered
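The last receipt measure above – checking delivery through patient follow-up questionnaires – can be summarised very simply once the answers are collected. In this sketch, the component names and the sample answers are hypothetical illustrations, not actual data from our study:

```python
# Summarise follow-up questionnaires: for each intervention component,
# what fraction of patients report that it was delivered?
# Component names and answers below are hypothetical, not PINTA data.

def delivery_rates(followups, components):
    """Return the fraction of follow-ups reporting each component as delivered."""
    return {
        c: sum(1 for answers in followups if answers.get(c)) / len(followups)
        for c in components
    }

components = ["screening", "brief_advice", "leaflet", "referral"]
followups = [  # one dict per patient follow-up questionnaire
    {"screening": True, "brief_advice": True, "leaflet": False, "referral": False},
    {"screening": True, "brief_advice": False, "leaflet": True, "referral": False},
    {"screening": True, "brief_advice": True, "leaflet": True, "referral": True},
]

for component, rate in delivery_rates(followups, components).items():
    print(f"{component}: {rate:.0%} of patients report delivery")
```

A per-component breakdown like this makes fidelity gaps visible at a glance: a component that most patients never recall receiving points to a delivery problem, not necessarily an ineffective intervention.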

Measuring fidelity in trials of complex interventions is important, and it is not technically demanding. Ultimately, it becomes a question of personal development and credibility – willingness to have one’s skills analysed and improved is the basis of reflective practice.