
Today I want to talk about possibly my favourite randomized controlled trial ever. As regular readers of this blog will know, I’ve spent a lot of time thinking and writing about RCTs — and the problems with RCTs. A widespread misunderstanding surrounds them, one that makes using them genuinely problematic.

Before anyone starts jumping all over me, I will include my usual disclaimer: I think randomized controlled trials are great at what they do. Unfortunately, the general public doesn’t understand what that is. Much of the confusion, I think, stems from people still struggling with probability, uncertainty, research questions, and the other factors that play into an RCT’s design. The massive marketing and branding efforts associated with RCTs in some sectors also bear some of the blame.

The RCT I’m going to talk about today is real. It’s been published in a reliable, peer-reviewed journal. It answers its research question perfectly and is highly replicable. And if you don’t read carefully, think about it critically, and apply all those highly accurate findings in precisely the right way, a lot of people will die.

 

Parachutes Don’t Work

The RCT in question allowed researchers to determine that a parachute is no more effective than an empty backpack at protecting you from harm while you jump from an aircraft.

Wait, what?

I’m not making this up. Cardiologist Robert Yeh, Harvard Medical School associate professor and attending physician at Beth Israel Deaconess Medical Center, ran a randomized controlled trial with some of his colleagues that conclusively showed no difference at all between wearing a parachute and wearing an empty backpack when jumping from a plane or helicopter… parked safely on a grassy field.

Yeh was trying to illustrate some of the problems with RCTs (and doing an excellent job of it, frankly). He wanted to highlight the importance of context and of understanding how an RCT is designed and conducted. His test was highly scientific and accurately answered the question it asked. But if a person read just the headline or the first few sentences, they would likely reach some wildly inaccurate conclusions.

Understanding the Problems with RCTs

“We have to look at the fine print, we have to understand the context in which research is designed and conducted to really properly interpret the results.”

~Dr. Robert Yeh

One of the biggest problems with RCTs is the human tendency to generalize based on a quick overview of the results. For a very clear demonstration of this, we need look no further than the Twitter responses to the publication of the parachute RCT. Responses ranged from those who clearly got the joke to those who quickly proved its point.

“Somebody paid for this study?”

“I wonder if the authors of this study which shows that parachutes do not work got into any trouble.”

“There is also something called “Booths Law” in skydiving, where as gear/equipment becomes safer, skydivers will take higher risk thereby keeping the fatality rate constant. Great when considering older studies over long time periods where background stats may have changed…”

“The author has tried this themselves and proved their study is valid?”

The paper, published in the most recent holiday issue of the BMJ, does make clear that participants in the trial only jumped two feet — and from a non-moving aircraft. But it doesn’t reveal that particular tidbit until the fourth paragraph. Anyone who assumed they understood the study before reading that far missed the point entirely.
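
To see why that detail matters so much, here’s a toy model in Python. It’s entirely my own construction (the heights, the risk numbers, and the injury_risk function are illustrative assumptions, not anything from the paper), but it shows how the same intervention can look useless or life-saving depending on which question you actually asked:

```python
# Toy model -- illustrative assumptions only, not data from the BMJ paper.
# The point: whether a parachute "works" depends entirely on the jump.

def injury_risk(height_m: float, parachute: bool) -> float:
    """Hypothetical probability of serious injury for a given jump."""
    if height_m < 1.0:
        # A two-foot hop from a parked aircraft is harmless either way,
        # which is exactly the scenario the trial measured.
        return 0.0
    # From altitude, the parachute matters enormously.
    return 0.001 if parachute else 0.999

# The question the trial actually answered (a ~0.6 m jump):
print(injury_risk(0.6, parachute=True), injury_risk(0.6, parachute=False))
# -> 0.0 0.0: "no difference between parachute and backpack"

# The question a headline-skimmer assumes was answered:
print(injury_risk(1000.0, parachute=True), injury_risk(1000.0, parachute=False))
# -> 0.001 0.999: the parachute turns out to be rather important
```

Both answers are “correct.” Only one of them is the answer the trial gives you.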

 

Make Sure You See the Full Picture

There is nothing inherently wrong with randomized controlled trials. The problems with RCTs arise when we don’t understand their context — and proceed as though we do. Ask yourself some critical questions before blindly putting your faith in something just because it’s presented as a “clinical trial.”

  • What question are the researchers asking? Is it what I think it is? Is the question being asked the same one that the trial answers?
  • Am I (or are the RCT administrators) making judgments about individuals? (RCTs are great for showing us averages, not so good on specifics; see the sketch after this list.)
  • Who participated in the RCT? Who was excluded, and why?
  • How were participants selected and divided into groups? Was the trial truly random? Are there other factors that could influence the results?
  • Do I really understand what’s being said? Have I read the fine print to understand the limitations of this particular RCT?
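
To make the “averages, not specifics” point concrete, here’s a minimal simulation sketch (hypothetical numbers, my own construction, not from any real trial). A treatment that strongly helps half the population and strongly harms the other half will, in a perfectly conducted RCT, show an average effect of roughly zero:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical individual effects: half of people gain 2 points,
# half lose 2 points, so the population-average effect is ~0.
individual_effect = np.where(rng.random(n) < 0.5, 2.0, -2.0)

# Randomize into treatment and control, as an RCT would.
treated = rng.random(n) < 0.5
baseline = rng.normal(50, 5, n)
outcome = baseline + treated * individual_effect

avg_effect = outcome[treated].mean() - outcome[~treated].mean()
print(f"Estimated average treatment effect: {avg_effect:.2f}")  # close to 0
print(f"Individuals strongly affected: "
      f"{np.mean(np.abs(individual_effect) > 1):.0%}")          # 100%
```

The trial’s estimate is accurate as an average, and it still tells you very little about what will happen to any particular person. That’s not a flaw in the RCT; it’s a limit on what the result means.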

Want help developing an RCT? Want to learn more about the problems with RCTs and how to avoid them? We’re here to help. Get in touch with the experts at Datassist today.
