It’s true. Randomized control trials are effectively the unicorns of the research world. Everyone talks about them, but few have ever actually seen a true RCT in the wild.
I presented at the Good Tech Conference in Chicago last month. At one point, an audience member asked a question about RCTs and why I called them unicorns. It’s really just my way of saying that, while many pieces of research claim to be based on the theory of RCTs, few truly achieve that goal.
Why does that matter? Data is data, right?
Unfortunately, a large collection of data that was designed to be a randomized control trial but didn’t quite make it is significantly less valuable to researchers than similar data collected under a more realistic design chosen from the start.
Building a True RCT
There is only one question randomized control trials can answer: what is the average treatment effect on the population? A true RCT answers this question very well — whether you’re studying an experimental new drug or the effects of a social program. But for your trial to be a true RCT, you must follow all technical protocols to the letter.
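To make the estimand concrete, here is a minimal sketch of what an RCT is estimating. Everything in it is hypothetical: a simulated population with a true treatment effect of 2.0, a genuinely random 50/50 assignment, and the average treatment effect estimated as a simple difference in group means.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: each unit has a baseline outcome drawn
# from a normal distribution; treatment adds a true effect of 2.0.
units = list(range(1000))
baseline = {i: random.gauss(10, 3) for i in units}

# True randomization: every unit has an equal chance of assignment.
random.shuffle(units)
treatment, control = set(units[:500]), set(units[500:])

# Observed outcomes: treated units get the treatment effect added.
outcome = {i: baseline[i] + (2.0 if i in treatment else 0.0) for i in units}

# With random assignment, a simple difference in means is an
# unbiased estimate of the average treatment effect.
ate = (statistics.mean(outcome[i] for i in treatment)
       - statistics.mean(outcome[i] for i in control))
print(round(ate, 2))  # close to the true effect of 2.0
```

The estimate lands near 2.0 precisely because assignment was random; every shortcut described below chips away at that guarantee.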
And that’s where many researchers run into problems.
Often, the reasons an RCT goes off the rails are valid. They’re almost always understandable. True RCTs come with their fair share of challenges:
- True randomization is a difficult thing to achieve.
- Correctly implementing data collection on randomized populations is expensive.
- There is a lot of work involved — often with little apparent benefit.
If your implementation team isn’t already enthusiastically on board (which can be a pretty big ask sometimes), it’s easy for small changes to creep into the project. We make slight adjustments. We shift people or communities, just a few, sometimes, from one group to another to fit within budget constraints. And just like that, you go from a true RCT to research that isn’t necessarily worth the paper it’s printed on. Suddenly, our RCT is no more.
Sometimes the factors that keep us from conducting a true RCT are even more subtle than the tweaks we make to keep things running. Many studies purporting to be RCTs fail to follow basic rules fundamental to a randomized control trial.
For example, if the health workers or program providers you’re working with know which groups in your study are “treatment” and which are “control”, you’re no longer conducting a true RCT. A non-blind study introduces confounding: when treatment providers know who is receiving the treatment, that knowledge can change how they deliver it and skew the results.
In addition, statisticians are increasingly recognizing the enumerator effect: the person who asks the question can affect the answer. (For example, a survey conducted by a male interviewer may elicit very different answers than the same survey conducted by a female interviewer.)
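One quick diagnostic for enumerator effects is to compare average responses grouped by the interviewer who collected them. This is a hypothetical sketch (the interviewer IDs and responses are made up, not from any real study):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey records: (interviewer_id, response on a 1-5 scale).
records = [
    ("enum_A", 4), ("enum_A", 5), ("enum_A", 4),
    ("enum_B", 2), ("enum_B", 3), ("enum_B", 2),
]

# Group responses by the enumerator who collected them.
by_enumerator = defaultdict(list)
for interviewer, response in records:
    by_enumerator[interviewer].append(response)

# Large, systematic gaps between enumerators' averages suggest that
# who asks the question is shifting the answers.
averages = {who: mean(vals) for who, vals in by_enumerator.items()}
print(averages)
```

A gap this large between two interviewers asking identical questions would be a red flag worth investigating before attributing any difference to the treatment.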
The Gold Standard… of What?
Most research and evaluation projects don’t set out to become RCTs. But well-meaning funders, donors, and advisors put them into the RCT box without really understanding the constraints or implications of calling something an RCT.
Why? Because they’ve bought into the belief that RCTs are the “gold standard.” There is one important question we must all ask when presented with this assertion: “The gold standard of what?” It’s true that RCTs are one of the best ways to get an unbiased average treatment effect for a population. However, if that’s not the question you’re answering, a true RCT is probably not what you’re looking for.
At Datassist, we specialize in helping journalists, nonprofits, and social sector organizations harness the power of data to change the world. Over the coming weeks, I’ll be continuing this theme of RCTs, talking more about issues, alternatives, and how to ensure our impact analysis really works. Stay tuned! If you have a specific question about RCTs or want help measuring your organization’s impact, get in touch today.
This post is the first in a series on RCTs. Check out the second installment, RCT Alternatives: Let’s Stop Agreeing to the Premise.