3.6% response rate? Shocking! For our new feasibility study, we sent over 200 invitations to primary care doctors in Ireland and the invitees sent us back a very strong signal. “We are not interested”, or “we are too busy”, or “we don’t have enough eligible patients”? Whatever the reason, the message remained the same: No, thanks.
Power in research experiments is about finding the truth. Experimenters want to know whether their drugs or treatments work. If a drug or treatment works and they give it to a group of people, some will improve and some won't; there is a lot of chance and uncertainty in any drug or treatment administration. If we want to know the truth beyond the effects of chance, we need to give the drug or treatment to the right number of people. There's a formula for it, known to most statisticians. It depends on many things, such as the size of the improvement you want to observe in the treated group, or other confounding factors. The higher the power of a study, the more likely it is to tell the truth (see, e.g., Dr Paul D Ellis's site here).
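The formula alluded to above can be sketched for the simplest case: comparing two group means with a two-sided z-test. This is a minimal illustration; the effect size and standard deviation below are made-up numbers, not figures from any actual study.

```python
import math
from statistics import NormalDist  # standard library, Python 3.8+

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate n per group to detect a mean difference `delta`
    with a two-sided z-test, assuming the outcome has SD `sigma`."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # critical value for a two-sided test
    z_beta = z(power)           # quantile matching the target power
    n = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    return math.ceil(n)

# e.g. to detect a 5-point drop in a score with SD 10 at 80% power:
print(sample_size_per_group(delta=5, sigma=10))  # 63 per group
```

Note how halving the detectable difference quadruples the required sample size; small expected effects are why trial sizes climb so quickly.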
A rule of thumb says that the more people there are in a study, the higher the chances of finding a meaningful impact of the intervention. Common sense also tells us that the more people in the trial, the more representative they are of the whole population, and the more confident you can be that your results apply to all – except for Martians, unless you really want to study Martian citizenship.
The easiest option would be to call some doctor friends and ask for a favour. This should work, but it's not really scientific. Or you can shut down the study and conclude that it's not feasible. Or you can run the study with the small number of interested participants. Or you can send another mailshot – a reminder – to all; sometimes that can help.
Clinical trials use elaborate methods to make sure that everybody does exactly what was planned. Measuring treatment fidelity means checking the agreement between the study plan and actual practice. Some health problems require complex changes. How do we measure fidelity in trials of complex interventions? Here are some ideas for fidelity checking.
Fidelity in our PINTA study
- Guidelines for primary care providers to manage problem alcohol use among problem drug users
- Scripted curriculum for the group training of providers
- Booster session (practice visits) to prevent drift in provider skills
- Access of providers to research staff for questions about the intervention
- Instructional video of patient–doctor interaction to standardize the delivery
- Cards with examples of standard drinks and scripted responses – to standardize the delivery
- Question about a patient scenario in follow-up questionnaires (telephone contact)
- SBIRT checklist for providers (process measure)
- Pre- and post-training test (knowledge measure)
- Patient follow-up questionnaire to check whether each component of the intervention was delivered
Measuring fidelity in trials of complex interventions is important. It is not technically demanding. Ultimately this becomes a question of personal development and credibility – willingness to have one’s skills analysed and improved is the basis of reflective practice.
Last days of my INVEST fellowship
Visiting research scholars make new friends quickly and parting is not always easy for them. I said bye in Portland (OR) five times:
First, I said bye to my writing group. This was my second group in the last 15 weeks. The first, a 10-week course of prompt-based writing, was a birthday gift from my wife. I enjoyed that first course so much that I decided to go for a second round. The new beginning was difficult, because we had a new group and new group dynamics, and dynamics matter most in writing groups. By the 3rd or 4th meeting, the group juice started to flow and we shared more and more feedback on our writing. Parting with the second group wasn't easy, but it was much smoother thanks to my experience with the first group; I felt I belonged there.
My point here – that saying bye slowly makes parting easier – should interest most visiting research scholars. Beyond this limited audience, however, my point should speak to anyone who faces parting with many good friends.
In our new paper, we outline plans for a study which should tell us whether doctors and patients on agonist treatment accept psychological interventions as a means of curbing problem alcohol use in primary care; it should also tell us whether we can do more research on this topic in Ireland. Access the full protocol here: http://www.researchprotocols.org/2013/2/e26/
For some people, publishing research protocols is no fun, for two reasons:
- everybody knows what you're doing,
- you have to do what you said – because everybody knows now.
However tough they are for researchers, these two reasons make publicly available research protocols the best way to achieve transparency in research. Transparent research is in line with the ethical principles of research conduct and makes an honorable contribution to scientific knowledge – to the honor pot. Together with accountability, it should be a core pillar of scientific discovery.
If these safeguards fail, we may see more instances of academic fraud and data falsification, such as Diederik Stapel's. The social psychology community has been embarrassed by the revelation that Diederik Stapel made up the data for his papers. The NY Times link provides a detailed analysis of Stapel and his academic fraud.
- the screening and treatment processes should be more systematic and proactive in all problem drug users, especially those with concurrent chronic illnesses or psychiatric co-morbidity,
- lower thresholds should be applied for both identification of and intervention in problem alcohol use, and for referral to specialist services,
- special skills and specialist supervision are required when managing persistent/dependent alcohol use in primary care.