Monday 13 July 2015

Potential Pitfalls in Evidence Based Medicine

In my last post, I talked a bit about why we need to use evidence in medicine. However, much as I support using evidence wherever possible, I can also see that there are a number of things that can go wrong when using an EBM approach. Most of these are not, per se, problems with EBM itself, but with how it's implemented. Nonetheless, I think they're worth discussing.

One of the biggest issues with evidence is knowing how to apply it. A study may say something that sounds potentially interesting, but it's important to work out whether the result actually applies to the patient sat in front of you before changing your practice. Was the study you're reading carried out exclusively in 50-year-old men with high blood pressure but no other co-morbidities? That doesn't mean that the 65-year-old diabetic woman in front of you won't benefit from the intervention studied, but it does mean that the evidence is less applicable to her and you should think carefully about applying it to her case. Lots of studies look at extremely specific groups. This is to reduce the likelihood of "confounding variables" - things other than the intervention which may result in a difference in outcome between study groups. However, the flip side is that the study result may not apply to those who differ from the specific group looked at in the study. It's therefore well worth having a good look at the inclusion criteria for participants in trials and bearing in mind that the results might not automatically apply to all of the patients you see.

A related issue arises when we look at guidelines. Clinical guidelines are available for many, many conditions now, and provide advice on interventions, investigations, referrals and so on. In the UK, most of these are issued by the National Institute for Health and Care Excellence (NICE) and, in Scotland, by the Scottish Intercollegiate Guidelines Network (SIGN). Guidelines are usually devised by a group of professionals appraising the available evidence - basically, they've done the hard work for you and read through all the evidence to determine what the best thing to do is in a number of situations. They will usually reference the evidence they used, should you wish to read it for yourself, and also tell you how strong the evidence is behind each recommendation. However, they are not hard and fast rules, they don't replace clinical decision making and they certainly don't cover every eventuality. Use them, just don't do so without thinking.

Another issue worth talking about is the difference between clinically significant and statistically significant. Statistical significance in most medical (and other) science is usually taken to mean p < 0.05. This means that, if the intervention genuinely made no difference, a result at least this extreme would be expected to occur by chance less than 1 time in 20; in other words, it's likely that any difference in outcome between groups was down to the intervention rather than just being coincidental. Statistical significance is important because it's how we know that our interventions have actually done something. However, it says nothing about whether the difference in outcome will make any kind of difference to a patient's health, well-being or long-term risks. This is another important thing to bear in mind before advising or prescribing an intervention based on evidence: will the outcome matter to my patient? An example where this becomes important is when thinking about statins, a group of drugs which lower cholesterol. There is good evidence that (a particular group of) patients who take statins are significantly less likely to suffer a stroke or heart attack within 20 years than those who don't. This sounds great, but if you have an octogenarian sat in front of you, does this really matter? They are unlikely to live another 20 years, so is it worth adding to their drug burden, with all the risks of side effects and drug interactions this brings? I'm not saying don't, just that you should be realistic about what the benefits will be to your patient. Maybe discuss the risks and benefits with them and see what they think.
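To make that distinction a bit more concrete, here's a small sketch in Python using entirely made-up numbers (not real trial data): with a large enough sample, even a trivially small difference in outcome can come out as "statistically significant" at p < 0.05, while meaning very little to the individual patient.

```python
# Minimal sketch, using invented numbers, of "statistically significant
# but clinically irrelevant". With 5,000 patients per arm, a 0.5 mmHg
# difference in blood pressure reduction easily reaches p < 0.05,
# but few clinicians would consider it meaningful for a patient.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Pretend outcome: reduction in systolic blood pressure (mmHg).
control = rng.normal(loc=10.0, scale=5.0, size=5000)
treatment = rng.normal(loc=10.5, scale=5.0, size=5000)

t_stat, p_value = stats.ttest_ind(treatment, control)

print(f"Mean difference: {treatment.mean() - control.mean():.2f} mmHg")
print(f"p-value: {p_value:.4f}")  # typically well below 0.05 at this sample size
```

The point of the toy example is simply that the p-value tells you how surprising the result would be if the intervention did nothing; it doesn't tell you whether the size of the effect is worth having.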

There are other important factors to think about too when looking at a paper/trial/study. Rather than go through all of them, it makes more sense to hand over to the experts at this point. There are really useful study appraisal checklists available on the CASP (Critical Appraisal Skills Programme) website, which guide you through the things you should ask yourself when you're considering the value of a piece of research.

Hopefully I've covered the main issues that arise when trying to implement evidence. In my next post, I plan to talk more about what we actually mean by "evidence" and how we can decide whether one piece of evidence is more or less worth using than another.
