This post comes from CPN member Carley King. Carley is a physiotherapist who has developed an interest in evidence-based medicine during her Master's in Clinical Research. Here Carley reports on the recent debate on the value of evidence-based medicine at the CSP Congress.
Spoiler alert: I’m not sure that evidence-based medicine (EBM) as we understand it at the moment is fit for purpose. That’s my bias out in the open! But on hearing this opening line, I couldn’t help but allow a small part of me to wonder if it was ridiculous to even consider an alternative…a very clever debating ploy there!
As the debate progressed, it became clear to me that there were some key issues arising from this motion, and I’ve tried to distil some of them here.
Firstly, we haven't quite decided what constitutes research evidence. The randomised controlled trial (RCT) is widely seen as the gold standard of research evidence. Randomisation is the best-known means we have of excluding bias…but we are humans with inherent biases, so how valid is the information that RCTs provide us with? Most research excludes populations with multiple comorbidities, so can we extrapolate trial findings to our increasingly complex patients?
Is physiotherapy as a profession, with all its nuances, really keeping RCTs as our gold standard of knowing that something works, or should we consider that RCTs are just one way of knowing something might work? I’m by no means suggesting we discard RCTs…but we’re putting this method on a pedestal, despite knowing that physiotherapy in practice has so many confounding variables that will influence how effective a treatment is.
If RCTs are the gold standard in research, this suggests that there is a silver standard, a bronze, a tin and so on. So where does physiotherapy research sit in this ranking of research methods? It's quite difficult to fully randomise many physiotherapy interventions, so by this stance we're immediately settling for second-rate research…is that really good enough? Or should we be looking to find a way of generating knowledge that actually suits the practice of physiotherapy, rather than being so medicalised? I think it's fairly widely accepted that we use a combination of biological, humanistic and social approaches in practice, rather than a purely biomedical one, but I'm not sure this is reflected in our approach to research – is this a chasm we need to try to bridge? And how do we go about that?
The concept of RCTs being the gold standard in research raised a point from the audience about using other disciplines to help inform how we generate research evidence. Do we make enough effort to look to other disciplines (design, education, engineering, law, poetry, or sociology, for example) for inspiration on how we might generate knowledge that is suited to our purposes?
The patient should be at the heart of everything we do in physiotherapy. So if we don’t use RCTs, how do we know that our treatment isn’t in fact causing harm? Should an intervention that isn’t necessarily seen as harmful, but hasn’t been proven to be effective, still be considered harmful? Is it harmful if you are providing a treatment that isn’t effective and delaying the appropriate treatment that has been shown to be effective? And is the question of effectiveness the only metric we want to use to determine our approach to patient care?
As physiotherapists, we are constantly fighting to be seen as a reputable profession, and this involves using interventions that are based on evidence. So if we're saying that, actually, interventions can be used without any research evidence, where does this leave our professional standing with other professions, such as doctors, and indeed with the general public? Are we at risk of losing the credibility that's taken so long to establish?
Or are we aiming for the wrong end point? Evidence-based medicine originated in the medical field, not physiotherapy per se; we've just jumped on the bandwagon (not that I'm criticising this, I can certainly see why we have). So are we trying to force a square peg into a round hole? Do we need more of a conversation about what actually constitutes research evidence, and how this is operationalised in practice? Do we need to broaden our minds as to what can be seen as research evidence, without a hierarchy in which one study design is seen as better than another, limiting the methodology of research being conducted (and published)?
When the final vote was cast, it was clear that the majority of the room rejected the idea that, in the absence of research evidence, an intervention should not be used. That is not to say that evidence-based medicine itself was being rejected, but rather that we need a conversation around some of the questions raised here, and in other forums. The number of questions posed in this blog alone suggests that there is much more thinking to be done, and I would love to hear others' thoughts…let's keep this debate going, and continue challenging the profession to think 'otherwise.'
Blake Boggenpoel says
Hi Dave
Thanks for this piece.
I totally agree with most of the points you make; however, I feel that when RCTs are done, researchers tend to create “ideal” environments in which to test interventions, and this is not the case in real life, as you clearly stated above. There should be a greater push to do more pragmatic trials than explanatory trials, since the findings of the former would be much easier to generalize to the “real-life setting”.
I do, however, think that if research indicates that an intervention is not effective, it should not be used. In the case of electrotherapy, which has been used for many years within the physiotherapy setting, ample studies have indicated that there is no conclusive answer as to whether or not these modalities are really effective. Even after all this research, these modalities continue to be used routinely in the clinical setting. Although these machines might not cause any serious adverse events, I think it is in a way unethical to treat a patient with a specific intervention that you know won't improve that patient's quality of life.
In closing, I think we should all be equipped to critically analyze and appraise evidence, as well as incorporate our expertise, to make an informed decision about what would be beneficial to our patients, rather than relying solely on what the research says.
xx says
I think there is an issue with the application of the research evidence base, in that it is often applied without a linked quality assurance plan.
How do we know that research evidence generated in a controlled environment is safe and effective when applied to our individual population and patients (e.g. taking account of comorbidities, cultural differences, and the skills and experience of the clinician)? Standard 12 of the HCPC standards of proficiency states that physiotherapists need to “be able to assure the quality of their practice”. One component of this is attention to evidence-based practice, but the largest contribution to quality assurance comes from collecting and using information to evaluate what we do, as individuals and as services. Quality assurance is often the missing link in evidence-based practice, but where it exists it allows patient-centred practice and responsiveness to populations. It also allows the development of individual practitioners and services.
At the very least, surely routine quality assurance activity is a gateway into formalised research activity.
Mart says
This begs the same old question as to what research is ‘best’ for evaluating physiotherapy interventions. With respect to interventions such as electrotherapy, as previously mentioned, we might be able to say it's not significantly effective, but can we say it's significantly ineffective? I think these are two different things.