Category: Pilot study

Recruitment shock

A 3.6% response rate? Shocking! For our new feasibility study, we sent over 200 invitations to primary care doctors in Ireland, and the invitees sent back a very strong signal. “We are not interested”, “we are too busy”, “we don’t have enough eligible patients” – whatever the reason, the message remained the same: No, thanks.

The primary objective of our study, as with most feasibility studies, is to estimate the numbers needed for a definitive trial. We want to know how many people should be invited into the study; of those, how many will be randomized; and of those, how many will stay until the end. Right from the beginning, we were faced with the question of whether we could recruit enough people for a fully powered trial.
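To make the funnel concrete, here is a minimal sketch of that estimation, worked backwards from a target number of completers. The 3.6% response rate comes from our mailshot; the consent and retention rates below are hypothetical placeholders, not PINTA estimates.

```python
import math

# Funnel rates: the response rate was observed in our mailshot;
# the other two are hypothetical placeholders for illustration.
RESPONSE_RATE = 0.036   # invitees who reply to the invitation
CONSENT_RATE = 0.80     # responders who agree to be randomized (assumed)
RETENTION_RATE = 0.75   # randomized participants who stay to the end (assumed)

def invitations_needed(completers_required: int) -> int:
    """Work the recruitment funnel backwards: how many invitations
    are needed to end up with the target number of completers?"""
    randomized = completers_required / RETENTION_RATE
    responders = randomized / CONSENT_RATE
    return math.ceil(responders / RESPONSE_RATE)

print(invitations_needed(60))  # 2778 invitations -- sobering at a 3.6% response rate
```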

Statistical power

Power in research experiments is about finding the truth. Experimenters want to know whether their drugs or treatments work. If a drug or treatment works and they give it to a group of people, some of them will improve and some won’t. There is a lot of chance and uncertainty in any drug or treatment administration. If we want to know the truth beyond the effects of chance, we need to give the drug or treatment to the right number of people. There’s a formula for it, known to most statisticians. It depends on many things, such as the size of the improvement you want to detect in the treated group and the variability of the outcome. The higher the power of a study, the more likely it is to find a true effect (see, e.g., Dr Paul D. Ellis’s site here).
A rule of thumb says that the more people there are in the study, the higher the chances of finding a meaningful impact of the intervention. Common sense also tells us that the more people in the trial, the more representative they are of the whole population – the more confident you can be that your results apply to all; except for Martians – unless you really want to study Martian citizens.
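For the curious, a minimal sketch of one common version of that formula – the normal approximation for comparing two group means – is below. The effect size, alpha and power values are illustrative defaults, not parameters of our study.

```python
import math
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants per arm for a two-sample comparison of means:
    n = 2 * (z_{1 - alpha/2} + z_{1 - beta})^2 / d^2,
    where d is the standardized effect size (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)           # corresponds to 1 - beta
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(n_per_group(0.5))  # 63 per arm to detect a "medium" effect with 80% power
```

Double the result for a two-arm trial, and inflate it for expected dropout – which is exactly why a feasibility study needs good estimates of recruitment and retention.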

Solution

The easiest option would be to call some doctor friends and ask for a favour. This should work, but it’s not really scientific. Or you can shut down the study and conclude that it is not feasible. Or you can run the study with the small number of interested participants. Or you can send another mailshot, a reminder, to everyone – sometimes that can help.

Fidelity questions

Clinical trials use elaborate methods to make sure that everybody does exactly what was planned. Measuring treatment fidelity means checking the agreement between the study plan and actual practice. Some health problems require complex changes. How do you measure fidelity in trials of complex interventions? Here are some ideas for fidelity checking.

The National Institutes of Health established a workgroup on treatment fidelity as part of its Behavior Change Consortium (1999). The workgroup surveyed each centre in the consortium to find out which fidelity measures were used in trials. Its recommendations span five areas: study design, training of providers, delivery of treatment, receipt of treatment, and enactment of treatment skills. They are useful for investigators who want to measure and improve treatment fidelity. The key areas for our study are design, training, delivery and receipt.

Fidelity in our PINTA study

Our feasibility study has several aims. The first is to estimate parameters for a fully powered clinical trial. The second is to find out whether our intervention works. As a complex intervention, it targets multiple levels – the doctor level and the patient level. We hope to improve doctors’ practices and patients’ health behaviour. Intervention fidelity in a multi-level study means adhering to different guidelines and processes. Our trainers must deliver uniform training to all learner groups. The doctors must provide consistent interventions to all patients in the intervention group.

The availability of personal portable audio recorders, e.g. smartphones, provides new and exciting opportunities for fidelity checking, but it also raises ethical issues. Doctors and other interventionists can easily record their consultations with patients and email them to researchers for fidelity checking, but email is not a secure channel for confidential recordings.

To avoid a potential confidentiality breach, the researchers can ring the doctors, give them a one-sentence brief and ask how they would respond should such a patient appear in their next appointment. Recording such phone calls poses no technical or ethical problem; it is not without limitations, though. A telephone consultation with a researcher in the role of the patient does not reflect real-life consultations and, as such, cannot be an accurate skills check. Doctors may also not want to be called and recorded for quality assurance purposes, even if it is anonymous and does not affect their income or professional standing.

When designing measures to improve treatment fidelity in our study, we have to consider how they will be perceived by our participants and providers. These are the strategies for monitoring and improving treatment fidelity that we plan to use:

Design:

  • Guidelines for primary care providers to manage problem alcohol use among problem drug users
  • Scripted curriculum for the group training of providers

Training:

  • Booster session (practice visits) to prevent drift in provider skills
  • Providers’ access to research staff for questions about the intervention

Delivery:

  • Instructional video of patient–doctor interaction to standardize the delivery
  • Cards with examples of standard drinks and scripted responses – to standardize the delivery
  • Question about a patient scenario in follow-up questionnaires (telephone contact)

Receipt:

  • SBIRT checklist for providers (process measure)
  • Pre- and post-training test (knowledge measure)
  • Patient follow-up questionnaire checking whether each component of the intervention was delivered (see the sketch after this list)
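As a rough illustration of that receipt measure, here is a minimal sketch that scores one patient’s follow-up questionnaire against a list of intervention components. The component names are invented for the example; they are not taken from the PINTA protocol.

```python
# Hypothetical intervention components -- illustrative only.
COMPONENTS = ["screening_questions", "feedback_on_results",
              "brief_advice", "referral_discussed"]

def receipt_score(responses: dict[str, bool]) -> float:
    """Fraction of intervention components the patient reports having
    received (1.0 = every component delivered)."""
    delivered = sum(responses.get(component, False) for component in COMPONENTS)
    return delivered / len(COMPONENTS)

# Example: a follow-up questionnaire reporting two of four components.
print(receipt_score({"screening_questions": True, "brief_advice": True}))  # 0.5
```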

Measuring fidelity in trials of complex interventions is important, and it is not technically demanding. Ultimately, it becomes a question of personal development and credibility: willingness to have one’s skills analysed and improved is the basis of reflective practice.